2025-05-25 03:00:44.866580 | Job console starting
2025-05-25 03:00:44.884814 | Updating git repos
2025-05-25 03:00:45.341345 | Cloning repos into workspace
2025-05-25 03:00:46.018373 | Restoring repo states
2025-05-25 03:00:46.095029 | Merging changes
2025-05-25 03:00:46.095052 | Checking out repos
2025-05-25 03:00:47.231248 | Preparing playbooks
2025-05-25 03:00:49.143978 | Running Ansible setup
2025-05-25 03:00:56.540570 | PRE-RUN START: [trusted : github.com/osism/zuul-config/playbooks/base/pre.yaml@main]
2025-05-25 03:00:57.767361 |
2025-05-25 03:00:57.767493 | PLAY [Base pre]
2025-05-25 03:00:57.787793 |
2025-05-25 03:00:57.787899 | TASK [Setup log path fact]
2025-05-25 03:00:57.805420 | orchestrator | ok
2025-05-25 03:00:57.819274 |
2025-05-25 03:00:57.819380 | TASK [set-zuul-log-path-fact : Set log path for a build]
2025-05-25 03:00:57.847274 | orchestrator | ok
2025-05-25 03:00:57.857790 |
2025-05-25 03:00:57.857877 | TASK [emit-job-header : Print job information]
2025-05-25 03:00:57.895979 | # Job Information
2025-05-25 03:00:57.896119 | Ansible Version: 2.16.14
2025-05-25 03:00:57.896147 | Job: testbed-deploy-in-a-nutshell-with-tempest-ubuntu-24.04
2025-05-25 03:00:57.896175 | Pipeline: periodic-daily
2025-05-25 03:00:57.896193 | Executor: 521e9411259a
2025-05-25 03:00:57.896210 | Triggered by: https://github.com/osism/testbed
2025-05-25 03:00:57.896228 | Event ID: 49477f2cdff444439baf6f879eb3658c
2025-05-25 03:00:57.903476 |
2025-05-25 03:00:57.903620 | LOOP [emit-job-header : Print node information]
2025-05-25 03:00:58.012403 | orchestrator | ok:
2025-05-25 03:00:58.012584 | orchestrator | # Node Information
2025-05-25 03:00:58.012616 | orchestrator | Inventory Hostname: orchestrator
2025-05-25 03:00:58.012637 | orchestrator | Hostname: zuul-static-regiocloud-infra-1
2025-05-25 03:00:58.012655 | orchestrator | Username: zuul-testbed02
2025-05-25 03:00:58.012672 | orchestrator | Distro: Debian 12.11
2025-05-25 03:00:58.012692 | orchestrator | Provider: static-testbed
2025-05-25 03:00:58.012709 | orchestrator | Region:
2025-05-25 03:00:58.012726 | orchestrator | Label: testbed-orchestrator
2025-05-25 03:00:58.012742 | orchestrator | Product Name: OpenStack Nova
2025-05-25 03:00:58.012758 | orchestrator | Interface IP: 81.163.193.140
2025-05-25 03:00:58.023464 |
2025-05-25 03:00:58.023581 | TASK [log-inventory : Ensure Zuul Ansible directory exists]
2025-05-25 03:00:58.473297 | orchestrator -> localhost | changed
2025-05-25 03:00:58.486352 |
2025-05-25 03:00:58.486498 | TASK [log-inventory : Copy ansible inventory to logs dir]
2025-05-25 03:00:59.584599 | orchestrator -> localhost | changed
2025-05-25 03:00:59.598168 |
2025-05-25 03:00:59.598282 | TASK [add-build-sshkey : Check to see if ssh key was already created for this build]
2025-05-25 03:00:59.926869 | orchestrator -> localhost | ok
2025-05-25 03:00:59.934882 |
2025-05-25 03:00:59.934995 | TASK [add-build-sshkey : Create a new key in workspace based on build UUID]
2025-05-25 03:00:59.963376 | orchestrator | ok
2025-05-25 03:00:59.980183 | orchestrator | included: /var/lib/zuul/builds/75d22ebf6c3e48d2a89c8d4ea630ef96/trusted/project_1/github.com/osism/openinfra-zuul-jobs/roles/add-build-sshkey/tasks/create-key-and-replace.yaml
2025-05-25 03:00:59.988199 |
2025-05-25 03:00:59.988305 | TASK [add-build-sshkey : Create Temp SSH key]
2025-05-25 03:01:01.136279 | orchestrator -> localhost | Generating public/private rsa key pair.
2025-05-25 03:01:01.137290 | orchestrator -> localhost | Your identification has been saved in /var/lib/zuul/builds/75d22ebf6c3e48d2a89c8d4ea630ef96/work/75d22ebf6c3e48d2a89c8d4ea630ef96_id_rsa
2025-05-25 03:01:01.137833 | orchestrator -> localhost | Your public key has been saved in /var/lib/zuul/builds/75d22ebf6c3e48d2a89c8d4ea630ef96/work/75d22ebf6c3e48d2a89c8d4ea630ef96_id_rsa.pub
2025-05-25 03:01:01.138034 | orchestrator -> localhost | The key fingerprint is:
2025-05-25 03:01:01.138154 | orchestrator -> localhost | SHA256:UJ2uO1m0VyhvJ+Jm/Sxx/ELiZtUZs0+fr9a4+5Jed9U zuul-build-sshkey
2025-05-25 03:01:01.138217 | orchestrator -> localhost | The key's randomart image is:
2025-05-25 03:01:01.138303 | orchestrator -> localhost | +---[RSA 3072]----+
2025-05-25 03:01:01.138358 | orchestrator -> localhost | | .. .            |
2025-05-25 03:01:01.138411 | orchestrator -> localhost | |  . o            |
2025-05-25 03:01:01.138458 | orchestrator -> localhost | | .  . .          |
2025-05-25 03:01:01.138552 | orchestrator -> localhost | |  . + .       .o.|
2025-05-25 03:01:01.138603 | orchestrator -> localhost | |   So + o      .E|
2025-05-25 03:01:01.138675 | orchestrator -> localhost | |  .  + B *o+|
2025-05-25 03:01:01.138723 | orchestrator -> localhost | |      = * O *B|
2025-05-25 03:01:01.138768 | orchestrator -> localhost | |     + + *.*oB|
2025-05-25 03:01:01.138818 | orchestrator -> localhost | |      + o =BB+|
2025-05-25 03:01:01.138925 | orchestrator -> localhost | +----[SHA256]-----+
2025-05-25 03:01:01.139058 | orchestrator -> localhost | ok: Runtime: 0:00:00.448364
2025-05-25 03:01:01.154189 |
2025-05-25 03:01:01.154455 | TASK [add-build-sshkey : Remote setup ssh keys (linux)]
2025-05-25 03:01:01.209300 | orchestrator | ok
2025-05-25 03:01:01.225604 | orchestrator | included: /var/lib/zuul/builds/75d22ebf6c3e48d2a89c8d4ea630ef96/trusted/project_1/github.com/osism/openinfra-zuul-jobs/roles/add-build-sshkey/tasks/remote-linux.yaml
2025-05-25 03:01:01.236722 |
2025-05-25 03:01:01.236887 | TASK [add-build-sshkey : Remove previously added zuul-build-sshkey]
2025-05-25 03:01:01.272875 | orchestrator | skipping: Conditional result was False
2025-05-25 03:01:01.282010 |
2025-05-25 03:01:01.282156 | TASK [add-build-sshkey : Enable access via build key on all nodes]
2025-05-25 03:01:02.027326 | orchestrator | changed
2025-05-25 03:01:02.035638 |
2025-05-25 03:01:02.035785 | TASK [add-build-sshkey : Make sure user has a .ssh]
2025-05-25 03:01:02.299778 | orchestrator | ok
2025-05-25 03:01:02.310231 |
2025-05-25 03:01:02.310385 | TASK [add-build-sshkey : Install build private key as SSH key on all nodes]
2025-05-25 03:01:02.771947 | orchestrator | ok
2025-05-25 03:01:02.780119 |
2025-05-25 03:01:02.780304 | TASK [add-build-sshkey : Install build public key as SSH key on all nodes]
2025-05-25 03:01:03.329741 | orchestrator | ok
2025-05-25 03:01:03.337579 |
2025-05-25 03:01:03.337703 | TASK [add-build-sshkey : Remote setup ssh keys (windows)]
2025-05-25 03:01:03.373595 | orchestrator | skipping: Conditional result was False
2025-05-25 03:01:03.393379 |
2025-05-25 03:01:03.393643 | TASK [remove-zuul-sshkey : Remove master key from local agent]
2025-05-25 03:01:04.005094 | orchestrator -> localhost | changed
2025-05-25 03:01:04.023300 |
2025-05-25 03:01:04.023442 | TASK [add-build-sshkey : Add back temp key]
2025-05-25 03:01:04.511565 | orchestrator -> localhost | Identity added: /var/lib/zuul/builds/75d22ebf6c3e48d2a89c8d4ea630ef96/work/75d22ebf6c3e48d2a89c8d4ea630ef96_id_rsa (zuul-build-sshkey)
2025-05-25 03:01:04.512920 | orchestrator -> localhost | ok: Runtime: 0:00:00.031222
2025-05-25 03:01:04.527283 |
2025-05-25 03:01:04.527437 | TASK [add-build-sshkey : Verify we can still SSH to all nodes]
2025-05-25 03:01:05.102456 | orchestrator | ok
2025-05-25 03:01:05.117627 |
2025-05-25 03:01:05.117773 | TASK [add-build-sshkey : Verify we can still SSH to all nodes (windows)]
2025-05-25 03:01:05.159535 | orchestrator | skipping: Conditional result was False
2025-05-25 03:01:05.292307 |
2025-05-25 03:01:05.292456 | TASK [start-zuul-console : Start zuul_console daemon.]
2025-05-25 03:01:05.808260 | orchestrator | ok
2025-05-25 03:01:05.823377 |
2025-05-25 03:01:05.823541 | TASK [validate-host : Define zuul_info_dir fact]
2025-05-25 03:01:05.854786 | orchestrator | ok
2025-05-25 03:01:05.862858 |
2025-05-25 03:01:05.862992 | TASK [validate-host : Ensure Zuul Ansible directory exists]
2025-05-25 03:01:06.235528 | orchestrator -> localhost | ok
2025-05-25 03:01:06.244552 |
2025-05-25 03:01:06.244686 | TASK [validate-host : Collect information about the host]
2025-05-25 03:01:07.652179 | orchestrator | ok
2025-05-25 03:01:07.674469 |
2025-05-25 03:01:07.674639 | TASK [validate-host : Sanitize hostname]
2025-05-25 03:01:07.737889 | orchestrator | ok
2025-05-25 03:01:07.746173 |
2025-05-25 03:01:07.746317 | TASK [validate-host : Write out all ansible variables/facts known for each host]
2025-05-25 03:01:08.513938 | orchestrator -> localhost | changed
2025-05-25 03:01:08.522548 |
2025-05-25 03:01:08.522684 | TASK [validate-host : Collect information about zuul worker]
2025-05-25 03:01:09.048377 | orchestrator | ok
2025-05-25 03:01:09.058181 |
2025-05-25 03:01:09.058386 | TASK [validate-host : Write out all zuul information for each host]
2025-05-25 03:01:09.767917 | orchestrator -> localhost | changed
2025-05-25 03:01:09.785529 |
2025-05-25 03:01:09.785684 | TASK [prepare-workspace-log : Start zuul_console daemon.]
2025-05-25 03:01:10.097510 | orchestrator | ok
2025-05-25 03:01:10.109555 |
2025-05-25 03:01:10.109727 | TASK [prepare-workspace-log : Synchronize src repos to workspace directory.]
2025-05-25 03:01:29.044458 | orchestrator | changed:
2025-05-25 03:01:29.044661 | orchestrator | .d..t...... src/
2025-05-25 03:01:29.044691 | orchestrator | .d..t...... src/github.com/
2025-05-25 03:01:29.044712 | orchestrator | .d..t...... src/github.com/osism/
2025-05-25 03:01:29.044731 | orchestrator | .d..t...... src/github.com/osism/ansible-collection-commons/
2025-05-25 03:01:29.044748 | orchestrator | RedHat.yml
2025-05-25 03:01:29.094371 | orchestrator | .L..t...... src/github.com/osism/ansible-collection-commons/roles/repository/tasks/CentOS.yml -> RedHat.yml
2025-05-25 03:01:29.094386 | orchestrator | RedHat.yml
2025-05-25 03:01:29.094430 | orchestrator | = 1.53.0"...
2025-05-25 03:01:45.391395 | orchestrator | 03:01:45.391 STDOUT terraform: - Finding hashicorp/local versions matching ">= 2.2.0"...
2025-05-25 03:01:46.993936 | orchestrator | 03:01:46.993 STDOUT terraform: - Installing hashicorp/null v3.2.4...
2025-05-25 03:01:48.105351 | orchestrator | 03:01:48.104 STDOUT terraform: - Installed hashicorp/null v3.2.4 (signed, key ID 0C0AF313E5FD9F80)
2025-05-25 03:01:49.419424 | orchestrator | 03:01:49.419 STDOUT terraform: - Installing terraform-provider-openstack/openstack v3.1.0...
2025-05-25 03:01:50.798426 | orchestrator | 03:01:50.798 STDOUT terraform: - Installed terraform-provider-openstack/openstack v3.1.0 (signed, key ID 4F80527A391BEFD2)
2025-05-25 03:01:52.025146 | orchestrator | 03:01:52.024 STDOUT terraform: - Installing hashicorp/local v2.5.3...
2025-05-25 03:01:53.057933 | orchestrator | 03:01:53.057 STDOUT terraform: - Installed hashicorp/local v2.5.3 (signed, key ID 0C0AF313E5FD9F80)
2025-05-25 03:01:53.058093 | orchestrator | 03:01:53.057 STDOUT terraform: Providers are signed by their developers.
2025-05-25 03:01:53.058126 | orchestrator | 03:01:53.057 STDOUT terraform: If you'd like to know more about provider signing, you can read about it here:
2025-05-25 03:01:53.058149 | orchestrator | 03:01:53.057 STDOUT terraform: https://opentofu.org/docs/cli/plugins/signing/
2025-05-25 03:01:53.058177 | orchestrator | 03:01:53.058 STDOUT terraform: OpenTofu has created a lock file .terraform.lock.hcl to record the provider
2025-05-25 03:01:53.058204 | orchestrator | 03:01:53.058 STDOUT terraform: selections it made above. Include this file in your version control repository
2025-05-25 03:01:53.058226 | orchestrator | 03:01:53.058 STDOUT terraform: so that OpenTofu can guarantee to make the same selections by default when
2025-05-25 03:01:53.058238 | orchestrator | 03:01:53.058 STDOUT terraform: you run "tofu init" in the future.
2025-05-25 03:01:53.058772 | orchestrator | 03:01:53.058 STDOUT terraform: OpenTofu has been successfully initialized!
2025-05-25 03:01:53.058826 | orchestrator | 03:01:53.058 STDOUT terraform: You may now begin working with OpenTofu. Try running "tofu plan" to see
2025-05-25 03:01:53.058921 | orchestrator | 03:01:53.058 STDOUT terraform: any changes that are required for your infrastructure. All OpenTofu commands
2025-05-25 03:01:53.058937 | orchestrator | 03:01:53.058 STDOUT terraform: should now work.
2025-05-25 03:01:53.058996 | orchestrator | 03:01:53.058 STDOUT terraform: If you ever set or change modules or backend configuration for OpenTofu,
2025-05-25 03:01:53.059042 | orchestrator | 03:01:53.058 STDOUT terraform: rerun this command to reinitialize your working directory. If you forget, other
2025-05-25 03:01:53.059073 | orchestrator | 03:01:53.059 STDOUT terraform: commands will detect it and remind you to do so if necessary.
2025-05-25 03:01:53.262420 | orchestrator | 03:01:53.262 WARN  The `TERRAGRUNT_TFPATH` environment variable is deprecated and will be removed in a future version of Terragrunt. Use `TG_TF_PATH=/home/zuul-testbed02/terraform` instead.
2025-05-25 03:01:53.467104 | orchestrator | 03:01:53.466 STDOUT terraform: Created and switched to workspace "ci"!
2025-05-25 03:01:53.467239 | orchestrator | 03:01:53.466 STDOUT terraform: You're now on a new, empty workspace. Workspaces isolate their state,
2025-05-25 03:01:53.467257 | orchestrator | 03:01:53.467 STDOUT terraform: so if you run "tofu plan" OpenTofu will not see any existing state
2025-05-25 03:01:53.467272 | orchestrator | 03:01:53.467 STDOUT terraform: for this configuration.
2025-05-25 03:01:53.697110 | orchestrator | 03:01:53.696 WARN  The `TERRAGRUNT_TFPATH` environment variable is deprecated and will be removed in a future version of Terragrunt. Use `TG_TF_PATH=/home/zuul-testbed02/terraform` instead.
2025-05-25 03:01:53.793187 | orchestrator | 03:01:53.793 STDOUT terraform: ci.auto.tfvars
2025-05-25 03:01:53.798119 | orchestrator | 03:01:53.797 STDOUT terraform: default_custom.tf
2025-05-25 03:01:54.012372 | orchestrator | 03:01:54.012 WARN  The `TERRAGRUNT_TFPATH` environment variable is deprecated and will be removed in a future version of Terragrunt. Use `TG_TF_PATH=/home/zuul-testbed02/terraform` instead.
2025-05-25 03:01:55.001444 | orchestrator | 03:01:55.001 STDOUT terraform: data.openstack_networking_network_v2.public: Reading...
2025-05-25 03:01:55.542074 | orchestrator | 03:01:55.541 STDOUT terraform: data.openstack_networking_network_v2.public: Read complete after 1s [id=e6be7364-bfd8-4de7-8120-8f41c69a139a]
2025-05-25 03:01:55.757826 | orchestrator | 03:01:55.757 STDOUT terraform: OpenTofu used the selected providers to generate the following execution
2025-05-25 03:01:55.757935 | orchestrator | 03:01:55.757 STDOUT terraform: plan. Resource actions are indicated with the following symbols:
2025-05-25 03:01:55.757946 | orchestrator | 03:01:55.757 STDOUT terraform:   + create
2025-05-25 03:01:55.757957 | orchestrator | 03:01:55.757 STDOUT terraform:  <= read (data resources)
2025-05-25 03:01:55.758009 | orchestrator | 03:01:55.757 STDOUT terraform: OpenTofu will perform the following actions:
2025-05-25 03:01:55.758132 | orchestrator | 03:01:55.758 STDOUT terraform:   # data.openstack_images_image_v2.image will be read during apply
2025-05-25 03:01:55.758184 | orchestrator | 03:01:55.758 STDOUT terraform:   # (config refers to values not yet known)
2025-05-25 03:01:55.758245 | orchestrator | 03:01:55.758 STDOUT terraform:  <= data "openstack_images_image_v2" "image" {
2025-05-25 03:01:55.758304 | orchestrator | 03:01:55.758 STDOUT terraform:   + checksum = (known after apply)
2025-05-25 03:01:55.758360 | orchestrator | 03:01:55.758 STDOUT terraform:   + created_at = (known after apply)
2025-05-25 03:01:55.758420 | orchestrator | 03:01:55.758 STDOUT terraform:   + file = (known after apply)
2025-05-25 03:01:55.758478 | orchestrator | 03:01:55.758 STDOUT terraform:   + id = (known after apply)
2025-05-25 03:01:55.758537 | orchestrator | 03:01:55.758 STDOUT terraform:   + metadata = (known after apply)
2025-05-25 03:01:55.758590 | orchestrator | 03:01:55.758 STDOUT terraform:   + min_disk_gb = (known after apply)
2025-05-25 03:01:55.758644 | orchestrator | 03:01:55.758 STDOUT terraform:   + min_ram_mb = (known after apply)
2025-05-25 03:01:55.758684 | orchestrator | 03:01:55.758 STDOUT terraform:   + most_recent = true
2025-05-25 03:01:55.758740 | orchestrator | 03:01:55.758 STDOUT terraform:   + name = (known after apply)
2025-05-25 03:01:55.758794 | orchestrator | 03:01:55.758 STDOUT terraform:   + protected = (known after apply)
2025-05-25 03:01:55.758864 | orchestrator | 03:01:55.758 STDOUT terraform:   + region = (known after apply)
2025-05-25 03:01:55.758911 | orchestrator | 03:01:55.758 STDOUT terraform:   + schema = (known after apply)
2025-05-25 03:01:55.758967 | orchestrator | 03:01:55.758 STDOUT terraform:   + size_bytes = (known after apply)
2025-05-25 03:01:55.759020 | orchestrator | 03:01:55.758 STDOUT terraform:   + tags = (known after apply)
2025-05-25 03:01:55.759078 | orchestrator | 03:01:55.759 STDOUT terraform:   + updated_at = (known after apply)
2025-05-25 03:01:55.759102 | orchestrator | 03:01:55.759 STDOUT terraform:   }
2025-05-25 03:01:55.759190 | orchestrator | 03:01:55.759 STDOUT terraform:   # data.openstack_images_image_v2.image_node will be read during apply
2025-05-25 03:01:55.759243 | orchestrator | 03:01:55.759 STDOUT terraform:   # (config refers to values not yet known)
2025-05-25 03:01:55.759311 | orchestrator | 03:01:55.759 STDOUT terraform:  <= data "openstack_images_image_v2" "image_node" {
2025-05-25 03:01:55.759363 | orchestrator | 03:01:55.759 STDOUT terraform:   + checksum = (known after apply)
2025-05-25 03:01:55.759414 | orchestrator | 03:01:55.759 STDOUT terraform:   + created_at = (known after apply)
2025-05-25 03:01:55.759470 | orchestrator | 03:01:55.759 STDOUT terraform:   + file = (known after apply)
2025-05-25 03:01:55.759527 | orchestrator | 03:01:55.759 STDOUT terraform:   + id = (known after apply)
2025-05-25 03:01:55.759579 | orchestrator | 03:01:55.759 STDOUT terraform:   + metadata = (known after apply)
2025-05-25 03:01:55.759635 | orchestrator | 03:01:55.759 STDOUT terraform:   + min_disk_gb = (known after apply)
2025-05-25 03:01:55.759687 | orchestrator | 03:01:55.759 STDOUT terraform:   + min_ram_mb = (known after apply)
2025-05-25 03:01:55.759725 | orchestrator | 03:01:55.759 STDOUT terraform:   + most_recent = true
2025-05-25 03:01:55.759778 | orchestrator | 03:01:55.759 STDOUT terraform:   + name = (known after apply)
2025-05-25 03:01:55.759833 | orchestrator | 03:01:55.759 STDOUT terraform:   + protected = (known after apply)
2025-05-25 03:01:55.759926 | orchestrator | 03:01:55.759 STDOUT terraform:   + region = (known after apply)
2025-05-25 03:01:55.759982 | orchestrator | 03:01:55.759 STDOUT terraform:   + schema = (known after apply)
2025-05-25 03:01:55.760040 | orchestrator | 03:01:55.759 STDOUT terraform:   + size_bytes = (known after apply)
2025-05-25 03:01:55.760087 | orchestrator | 03:01:55.760 STDOUT terraform:   + tags = (known after apply)
2025-05-25 03:01:55.760140 | orchestrator | 03:01:55.760 STDOUT terraform:   + updated_at = (known after apply)
2025-05-25 03:01:55.760165 | orchestrator | 03:01:55.760 STDOUT terraform:   }
2025-05-25 03:01:55.760252 | orchestrator | 03:01:55.760 STDOUT terraform:   # local_file.MANAGER_ADDRESS will be created
2025-05-25 03:01:55.760304 | orchestrator | 03:01:55.760 STDOUT terraform:   + resource "local_file" "MANAGER_ADDRESS" {
2025-05-25 03:01:55.760366 | orchestrator | 03:01:55.760 STDOUT terraform:   + content = (known after apply)
2025-05-25 03:01:55.760429 | orchestrator | 03:01:55.760 STDOUT terraform:   + content_base64sha256 = (known after apply)
2025-05-25 03:01:55.760490 | orchestrator | 03:01:55.760 STDOUT terraform:   + content_base64sha512 = (known after apply)
2025-05-25 03:01:55.760554 | orchestrator | 03:01:55.760 STDOUT terraform:   + content_md5 = (known after apply)
2025-05-25 03:01:55.760613 | orchestrator | 03:01:55.760 STDOUT terraform:   + content_sha1 = (known after apply)
2025-05-25 03:01:55.760676 | orchestrator | 03:01:55.760 STDOUT terraform:   + content_sha256 = (known after apply)
2025-05-25 03:01:55.760735 | orchestrator | 03:01:55.760 STDOUT terraform:   + content_sha512 = (known after apply)
2025-05-25 03:01:55.760800 | orchestrator | 03:01:55.760 STDOUT terraform:   + directory_permission = "0777"
2025-05-25 03:01:55.760891 | orchestrator | 03:01:55.760 STDOUT terraform:   + file_permission = "0644"
2025-05-25 03:01:55.760924 | orchestrator | 03:01:55.760 STDOUT terraform:   + filename = ".MANAGER_ADDRESS.ci"
2025-05-25 03:01:55.760993 | orchestrator | 03:01:55.760 STDOUT terraform:   + id = (known after apply)
2025-05-25 03:01:55.761018 | orchestrator | 03:01:55.760 STDOUT terraform:   }
2025-05-25 03:01:55.761069 | orchestrator | 03:01:55.761 STDOUT terraform:   # local_file.id_rsa_pub will be created
2025-05-25 03:01:55.761116 | orchestrator | 03:01:55.761 STDOUT terraform:   + resource "local_file" "id_rsa_pub" {
2025-05-25 03:01:55.761175 | orchestrator | 03:01:55.761 STDOUT terraform:   + content = (known after apply)
2025-05-25 03:01:55.761236 | orchestrator | 03:01:55.761 STDOUT terraform:   + content_base64sha256 = (known after apply)
2025-05-25 03:01:55.761295 | orchestrator | 03:01:55.761 STDOUT terraform:   + content_base64sha512 = (known after apply)
2025-05-25 03:01:55.761356 | orchestrator | 03:01:55.761 STDOUT terraform:   + content_md5 = (known after apply)
2025-05-25 03:01:55.761419 | orchestrator | 03:01:55.761 STDOUT terraform:   + content_sha1 = (known after apply)
2025-05-25 03:01:55.761482 | orchestrator | 03:01:55.761 STDOUT terraform:   + content_sha256 = (known after apply)
2025-05-25 03:01:55.761556 | orchestrator | 03:01:55.761 STDOUT terraform:   + content_sha512 = (known after apply)
2025-05-25 03:01:55.761599 | orchestrator | 03:01:55.761 STDOUT terraform:   + directory_permission = "0777"
2025-05-25 03:01:55.761642 | orchestrator | 03:01:55.761 STDOUT terraform:   + file_permission = "0644"
2025-05-25 03:01:55.761693 | orchestrator | 03:01:55.761 STDOUT terraform:   + filename = ".id_rsa.ci.pub"
2025-05-25 03:01:55.761755 | orchestrator | 03:01:55.761 STDOUT terraform:   + id = (known after apply)
2025-05-25 03:01:55.761779 | orchestrator | 03:01:55.761 STDOUT terraform:   }
2025-05-25 03:01:55.761821 | orchestrator | 03:01:55.761 STDOUT terraform:   # local_file.inventory will be created
2025-05-25 03:01:55.761915 | orchestrator | 03:01:55.761 STDOUT terraform:   + resource "local_file" "inventory" {
2025-05-25 03:01:55.761988 | orchestrator | 03:01:55.761 STDOUT terraform:   + content = (known after apply)
2025-05-25 03:01:55.762076 | orchestrator | 03:01:55.761 STDOUT terraform:   + content_base64sha256 = (known after apply)
2025-05-25 03:01:55.762144 | orchestrator | 03:01:55.762 STDOUT terraform:   + content_base64sha512 = (known after apply)
2025-05-25 03:01:55.762199 | orchestrator | 03:01:55.762 STDOUT terraform:   + content_md5 = (known after apply)
2025-05-25 03:01:55.763650 | orchestrator | 03:01:55.762 STDOUT terraform:   + content_sha1 = (known after apply)
2025-05-25 03:01:55.763818 | orchestrator | 03:01:55.762 STDOUT terraform:   + content_sha256 = (known after apply)
2025-05-25 03:01:55.763860 | orchestrator | 03:01:55.762 STDOUT terraform:   + content_sha512 = (known after apply)
2025-05-25 03:01:55.763873 | orchestrator | 03:01:55.762 STDOUT terraform:   + directory_permission = "0777"
2025-05-25 03:01:55.763884 | orchestrator | 03:01:55.762 STDOUT terraform:   + file_permission = "0644"
2025-05-25 03:01:55.763894 | orchestrator | 03:01:55.762 STDOUT terraform:   + filename = "inventory.ci"
2025-05-25 03:01:55.763904 | orchestrator | 03:01:55.762 STDOUT terraform:   + id = (known after apply)
2025-05-25 03:01:55.763915 | orchestrator | 03:01:55.762 STDOUT terraform:   }
2025-05-25 03:01:55.763925 | orchestrator | 03:01:55.762 STDOUT terraform:   # local_sensitive_file.id_rsa will be created
2025-05-25 03:01:55.763934 | orchestrator | 03:01:55.762 STDOUT terraform:   + resource "local_sensitive_file" "id_rsa" {
2025-05-25 03:01:55.763944 | orchestrator | 03:01:55.762 STDOUT terraform:   + content = (sensitive value)
2025-05-25 03:01:55.763954 | orchestrator | 03:01:55.762 STDOUT terraform:   + content_base64sha256 = (known after apply)
2025-05-25 03:01:55.763967 | orchestrator | 03:01:55.762 STDOUT terraform:   + content_base64sha512 = (known after apply)
2025-05-25 03:01:55.763978 | orchestrator | 03:01:55.762 STDOUT terraform:   + content_md5 = (known after apply)
2025-05-25 03:01:55.763988 | orchestrator | 03:01:55.762 STDOUT terraform:   + content_sha1 = (known after apply)
2025-05-25 03:01:55.763998 | orchestrator | 03:01:55.762 STDOUT terraform:   + content_sha256 = (known after apply)
2025-05-25 03:01:55.764008 | orchestrator | 03:01:55.762 STDOUT terraform:   + content_sha512 = (known after apply)
2025-05-25 03:01:55.764018 | orchestrator | 03:01:55.762 STDOUT terraform:   + directory_permission = "0700"
2025-05-25 03:01:55.764028 | orchestrator | 03:01:55.762 STDOUT terraform:   + file_permission = "0600"
2025-05-25 03:01:55.764038 | orchestrator | 03:01:55.762 STDOUT terraform:   + filename = ".id_rsa.ci"
2025-05-25 03:01:55.764048 | orchestrator | 03:01:55.763 STDOUT terraform:   + id = (known after apply)
2025-05-25 03:01:55.764057 | orchestrator | 03:01:55.763 STDOUT terraform:   }
2025-05-25 03:01:55.764067 | orchestrator | 03:01:55.763 STDOUT terraform:   # null_resource.node_semaphore will be created
2025-05-25 03:01:55.764077 | orchestrator | 03:01:55.763 STDOUT terraform:   + resource "null_resource" "node_semaphore" {
2025-05-25 03:01:55.764086 | orchestrator | 03:01:55.763 STDOUT terraform:   + id = (known after apply)
2025-05-25 03:01:55.764096 | orchestrator | 03:01:55.763 STDOUT terraform:   }
2025-05-25 03:01:55.764106 | orchestrator | 03:01:55.763 STDOUT terraform:   # openstack_blockstorage_volume_v3.manager_base_volume[0] will be created
2025-05-25 03:01:55.764117 | orchestrator | 03:01:55.763 STDOUT terraform:   + resource "openstack_blockstorage_volume_v3" "manager_base_volume" {
2025-05-25 03:01:55.764127 | orchestrator | 03:01:55.763 STDOUT terraform:   + attachment = (known after apply)
2025-05-25 03:01:55.764154 | orchestrator | 03:01:55.763 STDOUT terraform:   + availability_zone = "nova"
2025-05-25 03:01:55.764164 | orchestrator | 03:01:55.763 STDOUT terraform:   + id = (known after apply)
2025-05-25 03:01:55.764174 | orchestrator | 03:01:55.763 STDOUT terraform:   + image_id = (known after apply)
2025-05-25 03:01:55.764184 | orchestrator | 03:01:55.763 STDOUT terraform:   + metadata = (known after apply)
2025-05-25 03:01:55.764205 | orchestrator | 03:01:55.763 STDOUT terraform:   + name = "testbed-volume-manager-base"
2025-05-25 03:01:55.764244 | orchestrator | 03:01:55.763 STDOUT terraform:   + region = (known after apply)
2025-05-25 03:01:55.764255 | orchestrator | 03:01:55.763 STDOUT terraform:   + size = 80
2025-05-25 03:01:55.764265 | orchestrator | 03:01:55.763 STDOUT terraform:   + volume_retype_policy = "never"
2025-05-25 03:01:55.764274 | orchestrator | 03:01:55.763 STDOUT terraform:   + volume_type = "ssd"
2025-05-25 03:01:55.764284 | orchestrator | 03:01:55.763 STDOUT terraform:   }
2025-05-25 03:01:55.764294 | orchestrator | 03:01:55.763 STDOUT terraform:   # openstack_blockstorage_volume_v3.node_base_volume[0] will be created
2025-05-25 03:01:55.764304 | orchestrator | 03:01:55.763 STDOUT terraform:   + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
2025-05-25 03:01:55.764314 | orchestrator | 03:01:55.763 STDOUT terraform:   + attachment = (known after apply)
2025-05-25 03:01:55.764323 | orchestrator | 03:01:55.763 STDOUT terraform:   + availability_zone = "nova"
2025-05-25 03:01:55.764333 | orchestrator | 03:01:55.763 STDOUT terraform:   + id = (known after apply)
2025-05-25 03:01:55.764343 | orchestrator | 03:01:55.764 STDOUT terraform:   + image_id = (known after apply)
2025-05-25 03:01:55.764352 | orchestrator | 03:01:55.764 STDOUT terraform:   + metadata = (known after apply)
2025-05-25 03:01:55.764362 | orchestrator | 03:01:55.764 STDOUT terraform:   + name = "testbed-volume-0-node-base"
2025-05-25 03:01:55.764376 | orchestrator | 03:01:55.764 STDOUT terraform:   + region = (known after apply)
2025-05-25 03:01:55.764386 | orchestrator | 03:01:55.764 STDOUT terraform:   + size = 80
2025-05-25 03:01:55.764396 | orchestrator | 03:01:55.764 STDOUT terraform:   + volume_retype_policy = "never"
2025-05-25 03:01:55.764405 | orchestrator | 03:01:55.764 STDOUT terraform:   + volume_type = "ssd"
2025-05-25 03:01:55.764415 | orchestrator | 03:01:55.764 STDOUT terraform:   }
2025-05-25 03:01:55.764428 | orchestrator | 03:01:55.764 STDOUT terraform:   # openstack_blockstorage_volume_v3.node_base_volume[1] will be created
2025-05-25 03:01:55.764489 | orchestrator | 03:01:55.764 STDOUT terraform:   + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
2025-05-25 03:01:55.764534 | orchestrator | 03:01:55.764 STDOUT terraform:   + attachment = (known after apply)
2025-05-25 03:01:55.764572 | orchestrator | 03:01:55.764 STDOUT terraform:   + availability_zone = "nova"
2025-05-25 03:01:55.764638 | orchestrator | 03:01:55.764 STDOUT terraform:   + id = (known after apply)
2025-05-25 03:01:55.764677 | orchestrator | 03:01:55.764 STDOUT terraform:   + image_id = (known after apply)
2025-05-25 03:01:55.764731 | orchestrator | 03:01:55.764 STDOUT terraform:   + metadata = (known after apply)
2025-05-25 03:01:55.764812 | orchestrator | 03:01:55.764 STDOUT terraform:   + name = "testbed-volume-1-node-base"
2025-05-25 03:01:55.764867 | orchestrator | 03:01:55.764 STDOUT terraform:   + region = (known after apply)
2025-05-25 03:01:55.764882 | orchestrator | 03:01:55.764 STDOUT terraform:   + size = 80
2025-05-25 03:01:55.764924 | orchestrator | 03:01:55.764 STDOUT terraform:   + volume_retype_policy = "never"
2025-05-25 03:01:55.764960 | orchestrator | 03:01:55.764 STDOUT terraform:   + volume_type = "ssd"
2025-05-25 03:01:55.764975 | orchestrator | 03:01:55.764 STDOUT terraform:   }
2025-05-25 03:01:55.765156 | orchestrator | 03:01:55.764 STDOUT terraform:   # openstack_blockstorage_volume_v3.node_base_volume[2] will be created
2025-05-25 03:01:55.765230 | orchestrator | 03:01:55.765 STDOUT terraform:   + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
2025-05-25 03:01:55.765261 | orchestrator | 03:01:55.765 STDOUT terraform:   + attachment = (known after apply)
2025-05-25 03:01:55.765278 | orchestrator | 03:01:55.765 STDOUT terraform:   + availability_zone = "nova"
2025-05-25 03:01:55.765292 | orchestrator | 03:01:55.765 STDOUT terraform:   + id = (known after apply)
2025-05-25 03:01:55.765312 | orchestrator | 03:01:55.765 STDOUT terraform:   + image_id = (known after apply)
2025-05-25 03:01:55.765331 | orchestrator | 03:01:55.765 STDOUT terraform:   + metadata = (known after apply)
2025-05-25 03:01:55.765407 | orchestrator | 03:01:55.765 STDOUT terraform:   + name = "testbed-volume-2-node-base"
2025-05-25 03:01:55.765461 | orchestrator | 03:01:55.765 STDOUT terraform:   + region = (known after apply)
2025-05-25 03:01:55.765480 | orchestrator | 03:01:55.765 STDOUT terraform:   + size = 80
2025-05-25 03:01:55.765521 | orchestrator | 03:01:55.765 STDOUT terraform:   + volume_retype_policy = "never"
2025-05-25 03:01:55.765542 | orchestrator | 03:01:55.765 STDOUT terraform:   + volume_type = "ssd"
2025-05-25 03:01:55.765560 | orchestrator | 03:01:55.765 STDOUT terraform:   }
2025-05-25 03:01:55.765638 | orchestrator | 03:01:55.765 STDOUT terraform:   # openstack_blockstorage_volume_v3.node_base_volume[3] will be created
2025-05-25 03:01:55.765706 | orchestrator | 03:01:55.765 STDOUT terraform:   + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
2025-05-25 03:01:55.765758 | orchestrator | 03:01:55.765 STDOUT terraform:   + attachment = (known after apply)
2025-05-25 03:01:55.765805 | orchestrator | 03:01:55.765 STDOUT terraform:   + availability_zone = "nova"
2025-05-25 03:01:55.765827 | orchestrator | 03:01:55.765 STDOUT terraform:   + id = (known after apply)
2025-05-25 03:01:55.765953 | orchestrator | 03:01:55.765 STDOUT terraform:   + image_id = (known after apply)
2025-05-25 03:01:55.766001 | orchestrator | 03:01:55.765 STDOUT terraform:   + metadata = (known after apply)
2025-05-25 03:01:55.766084 | orchestrator | 03:01:55.765 STDOUT terraform:   + name = "testbed-volume-3-node-base"
2025-05-25 03:01:55.766140 | orchestrator | 03:01:55.766 STDOUT terraform:   + region = (known after apply)
2025-05-25 03:01:55.766183 | orchestrator | 03:01:55.766 STDOUT terraform:   + size = 80
2025-05-25 03:01:55.766203 | orchestrator | 03:01:55.766 STDOUT terraform:   + volume_retype_policy = "never"
2025-05-25 03:01:55.766222 | orchestrator | 03:01:55.766 STDOUT terraform:   + volume_type = "ssd"
2025-05-25 03:01:55.766242 | orchestrator | 03:01:55.766 STDOUT terraform:   }
2025-05-25 03:01:55.766318 | orchestrator | 03:01:55.766 STDOUT terraform:   # openstack_blockstorage_volume_v3.node_base_volume[4] will be created
2025-05-25 03:01:55.766388 | orchestrator | 03:01:55.766 STDOUT terraform:   + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
2025-05-25 03:01:55.766440 | orchestrator | 03:01:55.766 STDOUT terraform:   + attachment = (known after apply)
2025-05-25 03:01:55.766461 | orchestrator | 03:01:55.766 STDOUT terraform:   + availability_zone = "nova"
2025-05-25 03:01:55.766516 | orchestrator | 03:01:55.766 STDOUT terraform:   + id = (known after apply)
2025-05-25 03:01:55.766564 | orchestrator | 03:01:55.766 STDOUT terraform:   + image_id = (known after apply)
2025-05-25 03:01:55.766611 | orchestrator | 03:01:55.766 STDOUT terraform:   + metadata = (known after apply)
2025-05-25 03:01:55.766670 | orchestrator | 03:01:55.766 STDOUT terraform:   + name = "testbed-volume-4-node-base"
2025-05-25 03:01:55.766720 | orchestrator | 03:01:55.766 STDOUT terraform:   + region = (known after apply)
2025-05-25 03:01:55.766741 | orchestrator | 03:01:55.766 STDOUT terraform:   + size = 80
2025-05-25 03:01:55.766790 | orchestrator | 03:01:55.766 STDOUT terraform:   + volume_retype_policy = "never"
2025-05-25 03:01:55.766812 | orchestrator | 03:01:55.766 STDOUT terraform:   + volume_type = "ssd"
2025-05-25 03:01:55.766828 | orchestrator | 03:01:55.766 STDOUT terraform:   }
2025-05-25 03:01:55.766890 | orchestrator | 03:01:55.766 STDOUT terraform:   # openstack_blockstorage_volume_v3.node_base_volume[5] will be created
2025-05-25 03:01:55.766952 | orchestrator | 03:01:55.766 STDOUT terraform:   + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
2025-05-25 03:01:55.766999 | orchestrator | 03:01:55.766 STDOUT terraform:   + attachment = (known after apply)
2025-05-25 03:01:55.767019 | orchestrator | 03:01:55.766 STDOUT terraform:   + availability_zone = "nova"
2025-05-25 03:01:55.767077 | orchestrator | 03:01:55.767 STDOUT terraform:   + id = (known after apply)
2025-05-25 03:01:55.767124 | orchestrator | 03:01:55.767 STDOUT terraform:   + image_id = (known after apply)
2025-05-25 03:01:55.767170 | orchestrator | 03:01:55.767 STDOUT terraform:   + metadata = (known after apply)
2025-05-25 03:01:55.767231 | orchestrator | 03:01:55.767 STDOUT terraform:   + name = "testbed-volume-5-node-base"
2025-05-25 03:01:55.767277 | orchestrator | 03:01:55.767 STDOUT terraform:   + region = (known after apply)
2025-05-25 03:01:55.767298 | orchestrator | 03:01:55.767 STDOUT terraform:   + size = 80
2025-05-25 03:01:55.767332 | orchestrator | 03:01:55.767 STDOUT terraform:   + volume_retype_policy = "never"
2025-05-25 03:01:55.767352 | orchestrator | 03:01:55.767 STDOUT terraform:   + volume_type = "ssd"
2025-05-25 03:01:55.767372 | orchestrator | 03:01:55.767 STDOUT terraform:   }
2025-05-25 03:01:55.767430 | orchestrator | 03:01:55.767 STDOUT terraform:   # openstack_blockstorage_volume_v3.node_volume[0] will be created
2025-05-25 03:01:55.767487 | orchestrator | 03:01:55.767 STDOUT terraform:   + resource "openstack_blockstorage_volume_v3" "node_volume" {
2025-05-25 03:01:55.767534 | orchestrator | 03:01:55.767 STDOUT terraform:   + attachment = (known after apply)
2025-05-25 03:01:55.767554 | orchestrator | 03:01:55.767 STDOUT terraform:   +
availability_zone = "nova" 2025-05-25 03:01:55.767609 | orchestrator | 03:01:55.767 STDOUT terraform:  + id = (known after apply) 2025-05-25 03:01:55.767657 | orchestrator | 03:01:55.767 STDOUT terraform:  + metadata = (known after apply) 2025-05-25 03:01:55.767707 | orchestrator | 03:01:55.767 STDOUT terraform:  + name = "testbed-volume-0-node-3" 2025-05-25 03:01:55.767754 | orchestrator | 03:01:55.767 STDOUT terraform:  + region = (known after apply) 2025-05-25 03:01:55.767774 | orchestrator | 03:01:55.767 STDOUT terraform:  + size = 20 2025-05-25 03:01:55.767808 | orchestrator | 03:01:55.767 STDOUT terraform:  + volume_retype_policy = "never" 2025-05-25 03:01:55.767879 | orchestrator | 03:01:55.767 STDOUT terraform:  + volume_type = "ssd" 2025-05-25 03:01:55.767897 | orchestrator | 03:01:55.767 STDOUT terraform:  } 2025-05-25 03:01:55.767917 | orchestrator | 03:01:55.767 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_volume[1] will be created 2025-05-25 03:01:55.767990 | orchestrator | 03:01:55.767 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_volume" { 2025-05-25 03:01:55.768028 | orchestrator | 03:01:55.767 STDOUT terraform:  + attachment = (known after apply) 2025-05-25 03:01:55.768048 | orchestrator | 03:01:55.768 STDOUT terraform:  + availability_zone = "nova" 2025-05-25 03:01:55.768104 | orchestrator | 03:01:55.768 STDOUT terraform:  + id = (known after apply) 2025-05-25 03:01:55.768150 | orchestrator | 03:01:55.768 STDOUT terraform:  + metadata = (known after apply) 2025-05-25 03:01:55.768201 | orchestrator | 03:01:55.768 STDOUT terraform:  + name = "testbed-volume-1-node-4" 2025-05-25 03:01:55.768248 | orchestrator | 03:01:55.768 STDOUT terraform:  + region = (known after apply) 2025-05-25 03:01:55.768267 | orchestrator | 03:01:55.768 STDOUT terraform:  + size = 20 2025-05-25 03:01:55.768303 | orchestrator | 03:01:55.768 STDOUT terraform:  + volume_retype_policy = "never" 2025-05-25 03:01:55.768328 | orchestrator | 
03:01:55.768 STDOUT terraform:  + volume_type = "ssd" 2025-05-25 03:01:55.768347 | orchestrator | 03:01:55.768 STDOUT terraform:  } 2025-05-25 03:01:55.768406 | orchestrator | 03:01:55.768 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_volume[2] will be created 2025-05-25 03:01:55.768462 | orchestrator | 03:01:55.768 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_volume" { 2025-05-25 03:01:55.768532 | orchestrator | 03:01:55.768 STDOUT terraform:  + attachment = (known after apply) 2025-05-25 03:01:55.768549 | orchestrator | 03:01:55.768 STDOUT terraform:  + availability_zone = "nova" 2025-05-25 03:01:55.768624 | orchestrator | 03:01:55.768 STDOUT terraform:  + id = (known after apply) 2025-05-25 03:01:55.768658 | orchestrator | 03:01:55.768 STDOUT terraform:  + metadata = (known after apply) 2025-05-25 03:01:55.768712 | orchestrator | 03:01:55.768 STDOUT terraform:  + name = "testbed-volume-2-node-5" 2025-05-25 03:01:55.768759 | orchestrator | 03:01:55.768 STDOUT terraform:  + region = (known after apply) 2025-05-25 03:01:55.768780 | orchestrator | 03:01:55.768 STDOUT terraform:  + size = 20 2025-05-25 03:01:55.768814 | orchestrator | 03:01:55.768 STDOUT terraform:  + volume_retype_policy = "never" 2025-05-25 03:01:55.768908 | orchestrator | 03:01:55.768 STDOUT terraform:  + volume_type = "ssd" 2025-05-25 03:01:55.768927 | orchestrator | 03:01:55.768 STDOUT terraform:  } 2025-05-25 03:01:55.768946 | orchestrator | 03:01:55.768 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_volume[3] will be created 2025-05-25 03:01:55.768982 | orchestrator | 03:01:55.768 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_volume" { 2025-05-25 03:01:55.769031 | orchestrator | 03:01:55.768 STDOUT terraform:  + attachment = (known after apply) 2025-05-25 03:01:55.769065 | orchestrator | 03:01:55.769 STDOUT terraform:  + availability_zone = "nova" 2025-05-25 03:01:55.769113 | orchestrator | 03:01:55.769 STDOUT 
terraform:  + id = (known after apply) 2025-05-25 03:01:55.769163 | orchestrator | 03:01:55.769 STDOUT terraform:  + metadata = (known after apply) 2025-05-25 03:01:55.769214 | orchestrator | 03:01:55.769 STDOUT terraform:  + name = "testbed-volume-3-node-3" 2025-05-25 03:01:55.769261 | orchestrator | 03:01:55.769 STDOUT terraform:  + region = (known after apply) 2025-05-25 03:01:55.769282 | orchestrator | 03:01:55.769 STDOUT terraform:  + size = 20 2025-05-25 03:01:55.769324 | orchestrator | 03:01:55.769 STDOUT terraform:  + volume_retype_policy = "never" 2025-05-25 03:01:55.769344 | orchestrator | 03:01:55.769 STDOUT terraform:  + volume_type = "ssd" 2025-05-25 03:01:55.769362 | orchestrator | 03:01:55.769 STDOUT terraform:  } 2025-05-25 03:01:55.769498 | orchestrator | 03:01:55.769 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_volume[4] will be created 2025-05-25 03:01:55.769554 | orchestrator | 03:01:55.769 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_volume" { 2025-05-25 03:01:55.769602 | orchestrator | 03:01:55.769 STDOUT terraform:  + attachment = (known after apply) 2025-05-25 03:01:55.769635 | orchestrator | 03:01:55.769 STDOUT terraform:  + availability_zone = "nova" 2025-05-25 03:01:55.769682 | orchestrator | 03:01:55.769 STDOUT terraform:  + id = (known after apply) 2025-05-25 03:01:55.769730 | orchestrator | 03:01:55.769 STDOUT terraform:  + metadata = (known after apply) 2025-05-25 03:01:55.769782 | orchestrator | 03:01:55.769 STDOUT terraform:  + name = "testbed-volume-4-node-4" 2025-05-25 03:01:55.769870 | orchestrator | 03:01:55.769 STDOUT terraform:  + region = (known after apply) 2025-05-25 03:01:55.769900 | orchestrator | 03:01:55.769 STDOUT terraform:  + size = 20 2025-05-25 03:01:55.769936 | orchestrator | 03:01:55.769 STDOUT terraform:  + volume_retype_policy = "never" 2025-05-25 03:01:55.769969 | orchestrator | 03:01:55.769 STDOUT terraform:  + volume_type = "ssd" 2025-05-25 03:01:55.769990 | 
orchestrator | 03:01:55.769 STDOUT terraform:  } 2025-05-25 03:01:55.772552 | orchestrator | 03:01:55.769 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_volume[5] will be created 2025-05-25 03:01:55.772633 | orchestrator | 03:01:55.772 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_volume" { 2025-05-25 03:01:55.772678 | orchestrator | 03:01:55.772 STDOUT terraform:  + attachment = (known after apply) 2025-05-25 03:01:55.772710 | orchestrator | 03:01:55.772 STDOUT terraform:  + availability_zone = "nova" 2025-05-25 03:01:55.772758 | orchestrator | 03:01:55.772 STDOUT terraform:  + id = (known after apply) 2025-05-25 03:01:55.772803 | orchestrator | 03:01:55.772 STDOUT terraform:  + metadata = (known after apply) 2025-05-25 03:01:55.772874 | orchestrator | 03:01:55.772 STDOUT terraform:  + name = "testbed-volume-5-node-5" 2025-05-25 03:01:55.772921 | orchestrator | 03:01:55.772 STDOUT terraform:  + region = (known after apply) 2025-05-25 03:01:55.772951 | orchestrator | 03:01:55.772 STDOUT terraform:  + size = 20 2025-05-25 03:01:55.772983 | orchestrator | 03:01:55.772 STDOUT terraform:  + volume_retype_policy = "never" 2025-05-25 03:01:55.773014 | orchestrator | 03:01:55.772 STDOUT terraform:  + volume_type = "ssd" 2025-05-25 03:01:55.773024 | orchestrator | 03:01:55.773 STDOUT terraform:  } 2025-05-25 03:01:55.773085 | orchestrator | 03:01:55.773 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_volume[6] will be created 2025-05-25 03:01:55.773140 | orchestrator | 03:01:55.773 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_volume" { 2025-05-25 03:01:55.773186 | orchestrator | 03:01:55.773 STDOUT terraform:  + attachment = (known after apply) 2025-05-25 03:01:55.773218 | orchestrator | 03:01:55.773 STDOUT terraform:  + availability_zone = "nova" 2025-05-25 03:01:55.773265 | orchestrator | 03:01:55.773 STDOUT terraform:  + id = (known after apply) 2025-05-25 03:01:55.773312 | orchestrator | 
03:01:55.773 STDOUT terraform:  + metadata = (known after apply) 2025-05-25 03:01:55.773361 | orchestrator | 03:01:55.773 STDOUT terraform:  + name = "testbed-volume-6-node-3" 2025-05-25 03:01:55.773408 | orchestrator | 03:01:55.773 STDOUT terraform:  + region = (known after apply) 2025-05-25 03:01:55.773437 | orchestrator | 03:01:55.773 STDOUT terraform:  + size = 20 2025-05-25 03:01:55.773477 | orchestrator | 03:01:55.773 STDOUT terraform:  + volume_retype_policy = "never" 2025-05-25 03:01:55.773498 | orchestrator | 03:01:55.773 STDOUT terraform:  + volume_type = "ssd" 2025-05-25 03:01:55.773507 | orchestrator | 03:01:55.773 STDOUT terraform:  } 2025-05-25 03:01:55.773566 | orchestrator | 03:01:55.773 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_volume[7] will be created 2025-05-25 03:01:55.773621 | orchestrator | 03:01:55.773 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_volume" { 2025-05-25 03:01:55.773665 | orchestrator | 03:01:55.773 STDOUT terraform:  + attachment = (known after apply) 2025-05-25 03:01:55.773695 | orchestrator | 03:01:55.773 STDOUT terraform:  + availability_zone = "nova" 2025-05-25 03:01:55.773742 | orchestrator | 03:01:55.773 STDOUT terraform:  + id = (known after apply) 2025-05-25 03:01:55.773805 | orchestrator | 03:01:55.773 STDOUT terraform:  + metadata = (known after apply) 2025-05-25 03:01:55.773851 | orchestrator | 03:01:55.773 STDOUT terraform:  + name = "testbed-volume-7-node-4" 2025-05-25 03:01:55.773893 | orchestrator | 03:01:55.773 STDOUT terraform:  + region = (known after apply) 2025-05-25 03:01:55.773919 | orchestrator | 03:01:55.773 STDOUT terraform:  + size = 20 2025-05-25 03:01:55.773950 | orchestrator | 03:01:55.773 STDOUT terraform:  + volume_retype_policy = "never" 2025-05-25 03:01:55.773982 | orchestrator | 03:01:55.773 STDOUT terraform:  + volume_type = "ssd" 2025-05-25 03:01:55.774002 | orchestrator | 03:01:55.773 STDOUT terraform:  } 2025-05-25 03:01:55.774081 | orchestrator | 
03:01:55.773 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_volume[8] will be created 2025-05-25 03:01:55.774135 | orchestrator | 03:01:55.774 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_volume" { 2025-05-25 03:01:55.774180 | orchestrator | 03:01:55.774 STDOUT terraform:  + attachment = (known after apply) 2025-05-25 03:01:55.774212 | orchestrator | 03:01:55.774 STDOUT terraform:  + availability_zone = "nova" 2025-05-25 03:01:55.774257 | orchestrator | 03:01:55.774 STDOUT terraform:  + id = (known after apply) 2025-05-25 03:01:55.774303 | orchestrator | 03:01:55.774 STDOUT terraform:  + metadata = (known after apply) 2025-05-25 03:01:55.774354 | orchestrator | 03:01:55.774 STDOUT terraform:  + name = "testbed-volume-8-node-5" 2025-05-25 03:01:55.774400 | orchestrator | 03:01:55.774 STDOUT terraform:  + region = (known after apply) 2025-05-25 03:01:55.774427 | orchestrator | 03:01:55.774 STDOUT terraform:  + size = 20 2025-05-25 03:01:55.774458 | orchestrator | 03:01:55.774 STDOUT terraform:  + volume_retype_policy = "never" 2025-05-25 03:01:55.774490 | orchestrator | 03:01:55.774 STDOUT terraform:  + volume_type = "ssd" 2025-05-25 03:01:55.774500 | orchestrator | 03:01:55.774 STDOUT terraform:  } 2025-05-25 03:01:55.774556 | orchestrator | 03:01:55.774 STDOUT terraform:  # openstack_compute_instance_v2.manager_server will be created 2025-05-25 03:01:55.774611 | orchestrator | 03:01:55.774 STDOUT terraform:  + resource "openstack_compute_instance_v2" "manager_server" { 2025-05-25 03:01:55.774655 | orchestrator | 03:01:55.774 STDOUT terraform:  + access_ip_v4 = (known after apply) 2025-05-25 03:01:55.774700 | orchestrator | 03:01:55.774 STDOUT terraform:  + access_ip_v6 = (known after apply) 2025-05-25 03:01:55.774744 | orchestrator | 03:01:55.774 STDOUT terraform:  + all_metadata = (known after apply) 2025-05-25 03:01:55.774789 | orchestrator | 03:01:55.774 STDOUT terraform:  + all_tags = (known after apply) 2025-05-25 
03:01:55.774820 | orchestrator | 03:01:55.774 STDOUT terraform:  + availability_zone = "nova" 2025-05-25 03:01:55.774863 | orchestrator | 03:01:55.774 STDOUT terraform:  + config_drive = true 2025-05-25 03:01:55.774908 | orchestrator | 03:01:55.774 STDOUT terraform:  + created = (known after apply) 2025-05-25 03:01:55.774950 | orchestrator | 03:01:55.774 STDOUT terraform:  + flavor_id = (known after apply) 2025-05-25 03:01:55.774988 | orchestrator | 03:01:55.774 STDOUT terraform:  + flavor_name = "OSISM-4V-16" 2025-05-25 03:01:55.775019 | orchestrator | 03:01:55.774 STDOUT terraform:  + force_delete = false 2025-05-25 03:01:55.775062 | orchestrator | 03:01:55.775 STDOUT terraform:  + hypervisor_hostname = (known after apply) 2025-05-25 03:01:55.775107 | orchestrator | 03:01:55.775 STDOUT terraform:  + id = (known after apply) 2025-05-25 03:01:55.775158 | orchestrator | 03:01:55.775 STDOUT terraform:  + image_id = (known after apply) 2025-05-25 03:01:55.775197 | orchestrator | 03:01:55.775 STDOUT terraform:  + image_name = (known after apply) 2025-05-25 03:01:55.775228 | orchestrator | 03:01:55.775 STDOUT terraform:  + key_pair = "testbed" 2025-05-25 03:01:55.775268 | orchestrator | 03:01:55.775 STDOUT terraform:  + name = "testbed-manager" 2025-05-25 03:01:55.775299 | orchestrator | 03:01:55.775 STDOUT terraform:  + power_state = "active" 2025-05-25 03:01:55.775344 | orchestrator | 03:01:55.775 STDOUT terraform:  + region = (known after apply) 2025-05-25 03:01:55.775386 | orchestrator | 03:01:55.775 STDOUT terraform:  + security_groups = (known after apply) 2025-05-25 03:01:55.775417 | orchestrator | 03:01:55.775 STDOUT terraform:  + stop_before_destroy = false 2025-05-25 03:01:55.775461 | orchestrator | 03:01:55.775 STDOUT terraform:  + updated = (known after apply) 2025-05-25 03:01:55.775513 | orchestrator | 03:01:55.775 STDOUT terraform:  + user_data = (known after apply) 2025-05-25 03:01:55.775525 | orchestrator | 03:01:55.775 STDOUT terraform:  + block_device 
{ 2025-05-25 03:01:55.775558 | orchestrator | 03:01:55.775 STDOUT terraform:  + boot_index = 0 2025-05-25 03:01:55.775595 | orchestrator | 03:01:55.775 STDOUT terraform:  + delete_on_termination = false 2025-05-25 03:01:55.775639 | orchestrator | 03:01:55.775 STDOUT terraform:  + destination_type = "volume" 2025-05-25 03:01:55.775668 | orchestrator | 03:01:55.775 STDOUT terraform:  + multiattach = false 2025-05-25 03:01:55.775708 | orchestrator | 03:01:55.775 STDOUT terraform:  + source_type = "volume" 2025-05-25 03:01:55.775761 | orchestrator | 03:01:55.775 STDOUT terraform:  + uuid = (known after apply) 2025-05-25 03:01:55.775772 | orchestrator | 03:01:55.775 STDOUT terraform:  } 2025-05-25 03:01:55.775781 | orchestrator | 03:01:55.775 STDOUT terraform:  + network { 2025-05-25 03:01:55.775808 | orchestrator | 03:01:55.775 STDOUT terraform:  + access_network = false 2025-05-25 03:01:55.775895 | orchestrator | 03:01:55.775 STDOUT terraform:  + fixed_ip_v4 = (known after apply) 2025-05-25 03:01:55.775923 | orchestrator | 03:01:55.775 STDOUT terraform:  + fixed_ip_v6 = (known after apply) 2025-05-25 03:01:55.775969 | orchestrator | 03:01:55.775 STDOUT terraform:  + mac = (known after apply) 2025-05-25 03:01:55.776005 | orchestrator | 03:01:55.775 STDOUT terraform:  + name = (known after apply) 2025-05-25 03:01:55.776045 | orchestrator | 03:01:55.775 STDOUT terraform:  + port = (known after apply) 2025-05-25 03:01:55.776085 | orchestrator | 03:01:55.776 STDOUT terraform:  + uuid = (known after apply) 2025-05-25 03:01:55.776105 | orchestrator | 03:01:55.776 STDOUT terraform:  } 2025-05-25 03:01:55.776115 | orchestrator | 03:01:55.776 STDOUT terraform:  } 2025-05-25 03:01:55.776169 | orchestrator | 03:01:55.776 STDOUT terraform:  # openstack_compute_instance_v2.node_server[0] will be created 2025-05-25 03:01:55.776216 | orchestrator | 03:01:55.776 STDOUT terraform:  + resource "openstack_compute_instance_v2" "node_server" { 2025-05-25 03:01:55.776258 | orchestrator | 
03:01:55.776 STDOUT terraform:  + access_ip_v4 = (known after apply) 2025-05-25 03:01:55.776301 | orchestrator | 03:01:55.776 STDOUT terraform:  + access_ip_v6 = (known after apply) 2025-05-25 03:01:55.776343 | orchestrator | 03:01:55.776 STDOUT terraform:  + all_metadata = (known after apply) 2025-05-25 03:01:55.776388 | orchestrator | 03:01:55.776 STDOUT terraform:  + all_tags = (known after apply) 2025-05-25 03:01:55.776417 | orchestrator | 03:01:55.776 STDOUT terraform:  + availability_zone = "nova" 2025-05-25 03:01:55.776442 | orchestrator | 03:01:55.776 STDOUT terraform:  + config_drive = true 2025-05-25 03:01:55.776482 | orchestrator | 03:01:55.776 STDOUT terraform:  + created = (known after apply) 2025-05-25 03:01:55.776525 | orchestrator | 03:01:55.776 STDOUT terraform:  + flavor_id = (known after apply) 2025-05-25 03:01:55.776560 | orchestrator | 03:01:55.776 STDOUT terraform:  + flavor_name = "OSISM-8V-32" 2025-05-25 03:01:55.776589 | orchestrator | 03:01:55.776 STDOUT terraform:  + force_delete = false 2025-05-25 03:01:55.776630 | orchestrator | 03:01:55.776 STDOUT terraform:  + hypervisor_hostname = (known after apply) 2025-05-25 03:01:55.776673 | orchestrator | 03:01:55.776 STDOUT terraform:  + id = (known after apply) 2025-05-25 03:01:55.776716 | orchestrator | 03:01:55.776 STDOUT terraform:  + image_id = (known after apply) 2025-05-25 03:01:55.776758 | orchestrator | 03:01:55.776 STDOUT terraform:  + image_name = (known after apply) 2025-05-25 03:01:55.776787 | orchestrator | 03:01:55.776 STDOUT terraform:  + key_pair = "testbed" 2025-05-25 03:01:55.776825 | orchestrator | 03:01:55.776 STDOUT terraform:  + name = "testbed-node-0" 2025-05-25 03:01:55.776867 | orchestrator | 03:01:55.776 STDOUT terraform:  + power_state = "active" 2025-05-25 03:01:55.776916 | orchestrator | 03:01:55.776 STDOUT terraform:  + region = (known after apply) 2025-05-25 03:01:55.776949 | orchestrator | 03:01:55.776 STDOUT terraform:  + security_groups = (known after apply) 
2025-05-25 03:01:55.776978 | orchestrator | 03:01:55.776 STDOUT terraform:  + stop_before_destroy = false 2025-05-25 03:01:55.777021 | orchestrator | 03:01:55.776 STDOUT terraform:  + updated = (known after apply) 2025-05-25 03:01:55.777079 | orchestrator | 03:01:55.777 STDOUT terraform:  + user_data = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854" 2025-05-25 03:01:55.777099 | orchestrator | 03:01:55.777 STDOUT terraform:  + block_device { 2025-05-25 03:01:55.777128 | orchestrator | 03:01:55.777 STDOUT terraform:  + boot_index = 0 2025-05-25 03:01:55.777162 | orchestrator | 03:01:55.777 STDOUT terraform:  + delete_on_termination = false 2025-05-25 03:01:55.777198 | orchestrator | 03:01:55.777 STDOUT terraform:  + destination_type = "volume" 2025-05-25 03:01:55.777233 | orchestrator | 03:01:55.777 STDOUT terraform:  + multiattach = false 2025-05-25 03:01:55.777269 | orchestrator | 03:01:55.777 STDOUT terraform:  + source_type = "volume" 2025-05-25 03:01:55.777314 | orchestrator | 03:01:55.777 STDOUT terraform:  + uuid = (known after apply) 2025-05-25 03:01:55.777331 | orchestrator | 03:01:55.777 STDOUT terraform:  } 2025-05-25 03:01:55.777350 | orchestrator | 03:01:55.777 STDOUT terraform:  + network { 2025-05-25 03:01:55.777374 | orchestrator | 03:01:55.777 STDOUT terraform:  + access_network = false 2025-05-25 03:01:55.777411 | orchestrator | 03:01:55.777 STDOUT terraform:  + fixed_ip_v4 = (known after apply) 2025-05-25 03:01:55.777448 | orchestrator | 03:01:55.777 STDOUT terraform:  + fixed_ip_v6 = (known after apply) 2025-05-25 03:01:55.777487 | orchestrator | 03:01:55.777 STDOUT terraform:  + mac = (known after apply) 2025-05-25 03:01:55.777525 | orchestrator | 03:01:55.777 STDOUT terraform:  + name = (known after apply) 2025-05-25 03:01:55.777562 | orchestrator | 03:01:55.777 STDOUT terraform:  + port = (known after apply) 2025-05-25 03:01:55.777600 | orchestrator | 03:01:55.777 STDOUT terraform:  + uuid = (known after apply) 2025-05-25 03:01:55.777609 | 
orchestrator | 03:01:55.777 STDOUT terraform:  } 2025-05-25 03:01:55.777630 | orchestrator | 03:01:55.777 STDOUT terraform:  } 2025-05-25 03:01:55.777682 | orchestrator | 03:01:55.777 STDOUT terraform:  # openstack_compute_instance_v2.node_server[1] will be created 2025-05-25 03:01:55.777732 | orchestrator | 03:01:55.777 STDOUT terraform:  + resource "openstack_compute_instance_v2" "node_server" { 2025-05-25 03:01:55.777774 | orchestrator | 03:01:55.777 STDOUT terraform:  + access_ip_v4 = (known after apply) 2025-05-25 03:01:55.777818 | orchestrator | 03:01:55.777 STDOUT terraform:  + access_ip_v6 = (known after apply) 2025-05-25 03:01:55.777872 | orchestrator | 03:01:55.777 STDOUT terraform:  + all_metadata = (known after apply) 2025-05-25 03:01:55.777912 | orchestrator | 03:01:55.777 STDOUT terraform:  + all_tags = (known after apply) 2025-05-25 03:01:55.777941 | orchestrator | 03:01:55.777 STDOUT terraform:  + availability_zone = "nova" 2025-05-25 03:01:55.777966 | orchestrator | 03:01:55.777 STDOUT terraform:  + config_drive = true 2025-05-25 03:01:55.778009 | orchestrator | 03:01:55.777 STDOUT terraform:  + created = (known after apply) 2025-05-25 03:01:55.778073 | orchestrator | 03:01:55.778 STDOUT terraform:  + flavor_id = (known after apply) 2025-05-25 03:01:55.778109 | orchestrator | 03:01:55.778 STDOUT terraform:  + flavor_name = "OSISM-8V-32" 2025-05-25 03:01:55.778138 | orchestrator | 03:01:55.778 STDOUT terraform:  + force_delete = false 2025-05-25 03:01:55.778179 | orchestrator | 03:01:55.778 STDOUT terraform:  + hypervisor_hostname = (known after apply) 2025-05-25 03:01:55.778232 | orchestrator | 03:01:55.778 STDOUT terraform:  + id = (known after apply) 2025-05-25 03:01:55.778273 | orchestrator | 03:01:55.778 STDOUT terraform:  + image_id = (known after apply) 2025-05-25 03:01:55.778314 | orchestrator | 03:01:55.778 STDOUT terraform:  + image_name = (known after apply) 2025-05-25 03:01:55.778330 | orchestrator | 03:01:55.778 STDOUT terraform:  + 
key_pair = "testbed" 2025-05-25 03:01:55.778377 | orchestrator | 03:01:55.778 STDOUT terraform:  + name = "testbed-node-1" 2025-05-25 03:01:55.778401 | orchestrator | 03:01:55.778 STDOUT terraform:  + power_state = "active" 2025-05-25 03:01:55.778444 | orchestrator | 03:01:55.778 STDOUT terraform:  + region = (known after apply) 2025-05-25 03:01:55.778487 | orchestrator | 03:01:55.778 STDOUT terraform:  + security_groups = (known after apply) 2025-05-25 03:01:55.778510 | orchestrator | 03:01:55.778 STDOUT terraform:  + stop_before_destroy = false 2025-05-25 03:01:55.778554 | orchestrator | 03:01:55.778 STDOUT terraform:  + updated = (known after apply) 2025-05-25 03:01:55.778614 | orchestrator | 03:01:55.778 STDOUT terraform:  + user_data = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854" 2025-05-25 03:01:55.778623 | orchestrator | 03:01:55.778 STDOUT terraform:  + block_device { 2025-05-25 03:01:55.778658 | orchestrator | 03:01:55.778 STDOUT terraform:  + boot_index = 0 2025-05-25 03:01:55.778693 | orchestrator | 03:01:55.778 STDOUT terraform:  + delete_on_termination = false 2025-05-25 03:01:55.778729 | orchestrator | 03:01:55.778 STDOUT terraform:  + destination_type = "volume" 2025-05-25 03:01:55.778764 | orchestrator | 03:01:55.778 STDOUT terraform:  + multiattach = false 2025-05-25 03:01:55.778804 | orchestrator | 03:01:55.778 STDOUT terraform:  + source_type = "volume" 2025-05-25 03:01:55.778865 | orchestrator | 03:01:55.778 STDOUT terraform:  + uuid = (known after apply) 2025-05-25 03:01:55.778882 | orchestrator | 03:01:55.778 STDOUT terraform:  } 2025-05-25 03:01:55.778895 | orchestrator | 03:01:55.778 STDOUT terraform:  + network { 2025-05-25 03:01:55.778909 | orchestrator | 03:01:55.778 STDOUT terraform:  + access_network = false 2025-05-25 03:01:55.778953 | orchestrator | 03:01:55.778 STDOUT terraform:  + fixed_ip_v4 = (known after apply) 2025-05-25 03:01:55.778985 | orchestrator | 03:01:55.778 STDOUT terraform:  + fixed_ip_v6 = (known after apply) 2025-05-25 
03:01:55.779019 | orchestrator | 03:01:55.778 STDOUT terraform:  + mac = (known after apply) 2025-05-25 03:01:55.779053 | orchestrator | 03:01:55.779 STDOUT terraform:  + name = (known after apply) 2025-05-25 03:01:55.779088 | orchestrator | 03:01:55.779 STDOUT terraform:  + port = (known after apply) 2025-05-25 03:01:55.779122 | orchestrator | 03:01:55.779 STDOUT terraform:  + uuid = (known after apply) 2025-05-25 03:01:55.779131 | orchestrator | 03:01:55.779 STDOUT terraform:  } 2025-05-25 03:01:55.779139 | orchestrator | 03:01:55.779 STDOUT terraform:  } 2025-05-25 03:01:55.779203 | orchestrator | 03:01:55.779 STDOUT terraform:  # openstack_compute_instance_v2.node_server[2] will be created 2025-05-25 03:01:55.779249 | orchestrator | 03:01:55.779 STDOUT terraform:  + resource "openstack_compute_instance_v2" "node_server" { 2025-05-25 03:01:55.779293 | orchestrator | 03:01:55.779 STDOUT terraform:  + access_ip_v4 = (known after apply) 2025-05-25 03:01:55.779323 | orchestrator | 03:01:55.779 STDOUT terraform:  + access_ip_v6 = (known after apply) 2025-05-25 03:01:55.779354 | orchestrator | 03:01:55.779 STDOUT terraform:  + all_metadata = (known after apply) 2025-05-25 03:01:55.779395 | orchestrator | 03:01:55.779 STDOUT terraform:  + all_tags = (known after apply) 2025-05-25 03:01:55.779418 | orchestrator | 03:01:55.779 STDOUT terraform:  + availability_zone = "nova" 2025-05-25 03:01:55.779427 | orchestrator | 03:01:55.779 STDOUT terraform:  + config_drive = true 2025-05-25 03:01:55.779472 | orchestrator | 03:01:55.779 STDOUT terraform:  + created = (known after apply) 2025-05-25 03:01:55.779511 | orchestrator | 03:01:55.779 STDOUT terraform:  + flavor_id = (known after apply) 2025-05-25 03:01:55.779542 | orchestrator | 03:01:55.779 STDOUT terraform:  + flavor_name = "OSISM-8V-32" 2025-05-25 03:01:55.779566 | orchestrator | 03:01:55.779 STDOUT terraform:  + force_delete = false 2025-05-25 03:01:55.779602 | orchestrator | 03:01:55.779 STDOUT terraform:  + 
      + hypervisor_hostname = (known after apply)
      + id                  = (known after apply)
      + image_id            = (known after apply)
      + image_name          = (known after apply)
      + key_pair            = "testbed"
      + name                = "testbed-node-2"
      + power_state         = "active"
      + region              = (known after apply)
      + security_groups     = (known after apply)
      + stop_before_destroy = false
      + updated             = (known after apply)
      + user_data           = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854"

      + block_device {
          + boot_index            = 0
          + delete_on_termination = false
          + destination_type      = "volume"
          + multiattach           = false
          + source_type           = "volume"
          + uuid                  = (known after apply)
        }

      + network {
          + access_network = false
          + fixed_ip_v4    = (known after apply)
          + fixed_ip_v6    = (known after apply)
          + mac            = (known after apply)
          + name           = (known after apply)
          + port           = (known after apply)
          + uuid           = (known after apply)
        }
    }

  # openstack_compute_instance_v2.node_server[3] will be created
  + resource "openstack_compute_instance_v2" "node_server" {
      + access_ip_v4        = (known after apply)
      + access_ip_v6        = (known after apply)
      + all_metadata        = (known after apply)
      + all_tags            = (known after apply)
      + availability_zone   = "nova"
      + config_drive        = true
      + created             = (known after apply)
      + flavor_id           = (known after apply)
      + flavor_name         = "OSISM-8V-32"
      + force_delete        = false
      + hypervisor_hostname = (known after apply)
      + id                  = (known after apply)
      + image_id            = (known after apply)
      + image_name          = (known after apply)
      + key_pair            = "testbed"
      + name                = "testbed-node-3"
      + power_state         = "active"
      + region              = (known after apply)
      + security_groups     = (known after apply)
      + stop_before_destroy = false
      + updated             = (known after apply)
      + user_data           = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854"

      + block_device {
          + boot_index            = 0
          + delete_on_termination = false
          + destination_type      = "volume"
          + multiattach           = false
          + source_type           = "volume"
          + uuid                  = (known after apply)
        }

      + network {
          + access_network = false
          + fixed_ip_v4    = (known after apply)
          + fixed_ip_v6    = (known after apply)
          + mac            = (known after apply)
          + name           = (known after apply)
          + port           = (known after apply)
          + uuid           = (known after apply)
        }
    }

  # openstack_compute_instance_v2.node_server[4] will be created
  + resource "openstack_compute_instance_v2" "node_server" {
      + access_ip_v4        = (known after apply)
      + access_ip_v6        = (known after apply)
      + all_metadata        = (known after apply)
      + all_tags            = (known after apply)
      + availability_zone   = "nova"
      + config_drive        = true
      + created             = (known after apply)
      + flavor_id           = (known after apply)
      + flavor_name         = "OSISM-8V-32"
      + force_delete        = false
      + hypervisor_hostname = (known after apply)
      + id                  = (known after apply)
      + image_id            = (known after apply)
      + image_name          = (known after apply)
      + key_pair            = "testbed"
      + name                = "testbed-node-4"
      + power_state         = "active"
      + region              = (known after apply)
      + security_groups     = (known after apply)
      + stop_before_destroy = false
      + updated             = (known after apply)
      + user_data           = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854"

      + block_device {
          + boot_index            = 0
          + delete_on_termination = false
          + destination_type      = "volume"
          + multiattach           = false
          + source_type           = "volume"
          + uuid                  = (known after apply)
        }

      + network {
          + access_network = false
          + fixed_ip_v4    = (known after apply)
          + fixed_ip_v6    = (known after apply)
          + mac            = (known after apply)
          + name           = (known after apply)
          + port           = (known after apply)
          + uuid           = (known after apply)
        }
    }

  # openstack_compute_instance_v2.node_server[5] will be created
  + resource "openstack_compute_instance_v2" "node_server" {
      + access_ip_v4        = (known after apply)
      + access_ip_v6        = (known after apply)
      + all_metadata        = (known after apply)
      + all_tags            = (known after apply)
      + availability_zone   = "nova"
      + config_drive        = true
      + created             = (known after apply)
      + flavor_id           = (known after apply)
      + flavor_name         = "OSISM-8V-32"
      + force_delete        = false
      + hypervisor_hostname = (known after apply)
      + id                  = (known after apply)
      + image_id            = (known after apply)
      + image_name          = (known after apply)
      + key_pair            = "testbed"
      + name                = "testbed-node-5"
      + power_state         = "active"
      + region              = (known after apply)
      + security_groups     = (known after apply)
      + stop_before_destroy = false
      + updated             = (known after apply)
      + user_data           = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854"

      + block_device {
          + boot_index            = 0
          + delete_on_termination = false
          + destination_type      = "volume"
          + multiattach           = false
          + source_type           = "volume"
          + uuid                  = (known after apply)
        }

      + network {
          + access_network = false
          + fixed_ip_v4    = (known after apply)
          + fixed_ip_v6    = (known after apply)
          + mac            = (known after apply)
          + name           = (known after apply)
          + port           = (known after apply)
          + uuid           = (known after apply)
        }
    }

  # openstack_compute_keypair_v2.key will be created
  + resource "openstack_compute_keypair_v2" "key" {
      + fingerprint = (known after apply)
      + id          = (known after apply)
      + name        = "testbed"
      + private_key = (sensitive value)
      + public_key  = (known after apply)
      + region      = (known after apply)
      + user_id     = (known after apply)
    }

  # openstack_compute_volume_attach_v2.node_volume_attachment[0] will be created
  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      + device      = (known after apply)
      + id          = (known after apply)
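The `node_server` entries in the plan above (boot-from-volume instances, flavor `OSISM-8V-32`, keypair `testbed`) would be produced by HCL along these lines. This is a sketch, not the testbed repository's actual configuration: the `node_volume` and `node_port_management` resource references and the `user_data` file name are assumptions.

```hcl
# Sketch of the configuration implied by the plan output above.
# Referenced resource names (node_volume, node_port_management) are assumed.
resource "openstack_compute_instance_v2" "node_server" {
  count             = 6
  name              = "testbed-node-${count.index}"
  flavor_name       = "OSISM-8V-32"
  availability_zone = "nova"
  key_pair          = openstack_compute_keypair_v2.key.name
  config_drive      = true
  power_state       = "active"
  user_data         = file("user_data.sh") # shown as a hash in the plan

  # Boot from a pre-created volume; keep the volume when the instance goes away.
  block_device {
    uuid                  = openstack_blockstorage_volume_v3.node_volume[count.index].id
    source_type           = "volume"
    destination_type      = "volume"
    boot_index            = 0
    delete_on_termination = false
  }

  # Attach via a pre-created management port rather than a bare network.
  network {
    port = openstack_networking_port_v2.node_port_management[count.index].id
  }
}
```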
      + instance_id = (known after apply)
      + region      = (known after apply)
      + volume_id   = (known after apply)
    }

  # openstack_compute_volume_attach_v2.node_volume_attachment[1] will be created
  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      + device      = (known after apply)
      + id          = (known after apply)
      + instance_id = (known after apply)
      + region      = (known after apply)
      + volume_id   = (known after apply)
    }

  # openstack_compute_volume_attach_v2.node_volume_attachment[2] will be created
  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      + device      = (known after apply)
      + id          = (known after apply)
      + instance_id = (known after apply)
      + region      = (known after apply)
      + volume_id   = (known after apply)
    }

  # openstack_compute_volume_attach_v2.node_volume_attachment[3] will be created
  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      + device      = (known after apply)
      + id          = (known after apply)
      + instance_id = (known after apply)
      + region      = (known after apply)
      + volume_id   = (known after apply)
    }

  # openstack_compute_volume_attach_v2.node_volume_attachment[4] will be created
  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      + device      = (known after apply)
      + id          = (known after apply)
      + instance_id = (known after apply)
      + region      = (known after apply)
      + volume_id   = (known after apply)
    }

  # openstack_compute_volume_attach_v2.node_volume_attachment[5] will be created
  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      + device      = (known after apply)
      + id          = (known after apply)
      + instance_id = (known after apply)
      + region      = (known after apply)
      + volume_id   = (known after apply)
    }

  # openstack_compute_volume_attach_v2.node_volume_attachment[6] will be created
  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      + device      = (known after apply)
      + id          = (known after apply)
      + instance_id = (known after apply)
      + region      = (known after apply)
      + volume_id   = (known after apply)
    }

  # openstack_compute_volume_attach_v2.node_volume_attachment[7] will be created
  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      + device      = (known after apply)
      + id          = (known after apply)
      + instance_id = (known after apply)
      + region      = (known after apply)
      + volume_id   = (known after apply)
    }

  # openstack_compute_volume_attach_v2.node_volume_attachment[8] will be created
  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      + device      = (known after apply)
      + id          = (known after apply)
      + instance_id = (known after apply)
      + region      = (known after apply)
      + volume_id   = (known after apply)
    }

  # openstack_networking_floatingip_associate_v2.manager_floating_ip_association will be created
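The nine `node_volume_attachment` entries above all resolve at apply time (`device`, `instance_id`, `volume_id` are unknown in the plan). A minimal sketch of the pattern, assuming a `data_volume` resource and an even distribution of volumes over the six nodes (the actual mapping is not visible in this log):

```hcl
# Sketch only: data_volume and the index arithmetic are assumptions.
resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
  count       = 9
  instance_id = openstack_compute_instance_v2.node_server[count.index % 6].id
  volume_id   = openstack_blockstorage_volume_v3.data_volume[count.index].id
}
```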
  + resource "openstack_networking_floatingip_associate_v2" "manager_floating_ip_association" {
      + fixed_ip    = (known after apply)
      + floating_ip = (known after apply)
      + id          = (known after apply)
      + port_id     = (known after apply)
      + region      = (known after apply)
    }

  # openstack_networking_floatingip_v2.manager_floating_ip will be created
  + resource "openstack_networking_floatingip_v2" "manager_floating_ip" {
      + address    = (known after apply)
      + all_tags   = (known after apply)
      + dns_domain = (known after apply)
      + dns_name   = (known after apply)
      + fixed_ip   = (known after apply)
      + id         = (known after apply)
      + pool       = "public"
      + port_id    = (known after apply)
      + region     = (known after apply)
      + subnet_id  = (known after apply)
      + tenant_id  = (known after apply)
    }

  # openstack_networking_network_v2.net_management will be created
  + resource "openstack_networking_network_v2" "net_management" {
      + admin_state_up          = (known after apply)
      + all_tags                = (known after apply)
      + availability_zone_hints = [
          + "nova",
        ]
      + dns_domain              = (known after apply)
      + external                = (known after apply)
      + id                      = (known after apply)
      + mtu                     = (known after apply)
      + name                    = "net-testbed-management"
      + port_security_enabled   = (known after apply)
      + qos_policy_id           = (known after apply)
      + region                  = (known after apply)
      + shared                  = (known after apply)
      + tenant_id               = (known after apply)
      + transparent_vlan        = (known after apply)

      + segments (known after apply)
    }

  # openstack_networking_port_v2.manager_port_management will be created
  + resource "openstack_networking_port_v2" "manager_port_management" {
      + admin_state_up         = (known after apply)
      + all_fixed_ips          = (known after apply)
      + all_security_group_ids = (known after apply)
      + all_tags               = (known after apply)
      + device_id              = (known after apply)
      + device_owner           = (known after apply)
      + dns_assignment         = (known after apply)
      + dns_name               = (known after apply)
      + id                     = (known after apply)
      + mac_address            = (known after apply)
      + network_id             = (known after apply)
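The floating-IP wiring planned above (an address from the `public` pool, associated with the manager's management port) can be sketched as follows; the association depends on the port, which is why both `floating_ip` and `port_id` are `(known after apply)` in the plan:

```hcl
# Sketch of the floating-IP resources implied by the plan output.
resource "openstack_networking_floatingip_v2" "manager_floating_ip" {
  pool = "public"
}

resource "openstack_networking_floatingip_associate_v2" "manager_floating_ip_association" {
  floating_ip = openstack_networking_floatingip_v2.manager_floating_ip.address
  port_id     = openstack_networking_port_v2.manager_port_management.id
}
```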
      + port_security_enabled  = (known after apply)
      + qos_policy_id          = (known after apply)
      + region                 = (known after apply)
      + security_group_ids     = (known after apply)
      + tenant_id              = (known after apply)

      + allowed_address_pairs {
          + ip_address = "192.168.112.0/20"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.8/20"
        }

      + binding (known after apply)

      + fixed_ip {
          + ip_address = "192.168.16.5"
          + subnet_id  = (known after apply)
        }
    }

  # openstack_networking_port_v2.node_port_management[0] will be created
  + resource "openstack_networking_port_v2" "node_port_management" {
      + admin_state_up         = (known after apply)
      + all_fixed_ips          = (known after apply)
      + all_security_group_ids = (known after apply)
      + all_tags               = (known after apply)
      + device_id              = (known after apply)
      + device_owner           = (known after apply)
      + dns_assignment         = (known after apply)
      + dns_name               = (known after apply)
      + id                     = (known after apply)
      + mac_address            = (known after apply)
      + network_id             = (known after apply)
      + port_security_enabled  = (known after apply)
      + qos_policy_id          = (known after apply)
      + region                 = (known after apply)
      + security_group_ids     = (known after apply)
      + tenant_id              = (known after apply)

      + allowed_address_pairs {
          + ip_address = "192.168.112.0/20"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.254/20"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.8/20"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.9/20"
        }

      + binding (known after apply)

      + fixed_ip {
          + ip_address = "192.168.16.10"
          + subnet_id  = (known after apply)
        }
    }

  # openstack_networking_port_v2.node_port_management[1] will be created
  + resource "openstack_networking_port_v2" "node_port_management" {
      + admin_state_up         = (known after apply)
      + all_fixed_ips          = (known after apply)
      + all_security_group_ids = (known after apply)
      + all_tags               = (known after apply)
      + device_id              = (known after apply)
      + device_owner           = (known after apply)
      + dns_assignment         = (known after apply)
      + dns_name               = (known after apply)
      + id                     = (known after apply)
      + mac_address            = (known after apply)
      + network_id             = (known after apply)
      + port_security_enabled  = (known after apply)
      + qos_policy_id          = (known after apply)
      + region                 = (known after apply)
      + security_group_ids     = (known after apply)
      + tenant_id              = (known after apply)

      + allowed_address_pairs {
          + ip_address = "192.168.112.0/20"
        }
+ allowed_address_pairs { 2025-05-25 03:01:55.791101 | orchestrator | 03:01:55.791 STDOUT terraform:  + ip_address = "192.168.16.254/20" 2025-05-25 03:01:55.791116 | orchestrator | 03:01:55.791 STDOUT terraform:  } 2025-05-25 03:01:55.791130 | orchestrator | 03:01:55.791 STDOUT terraform:  + allowed_address_pairs { 2025-05-25 03:01:55.791145 | orchestrator | 03:01:55.791 STDOUT terraform:  + ip_address = "192.168.16.8/20" 2025-05-25 03:01:55.791159 | orchestrator | 03:01:55.791 STDOUT terraform:  } 2025-05-25 03:01:55.791173 | orchestrator | 03:01:55.791 STDOUT terraform:  + allowed_address_pairs { 2025-05-25 03:01:55.791202 | orchestrator | 03:01:55.791 STDOUT terraform:  + ip_address = "192.168.16.9/20" 2025-05-25 03:01:55.791217 | orchestrator | 03:01:55.791 STDOUT terraform:  } 2025-05-25 03:01:55.791232 | orchestrator | 03:01:55.791 STDOUT terraform:  + binding (known after apply) 2025-05-25 03:01:55.791246 | orchestrator | 03:01:55.791 STDOUT terraform:  + fixed_ip { 2025-05-25 03:01:55.791261 | orchestrator | 03:01:55.791 STDOUT terraform:  + ip_address = "192.168.16.11" 2025-05-25 03:01:55.791303 | orchestrator | 03:01:55.791 STDOUT terraform:  + subnet_id = (known after apply) 2025-05-25 03:01:55.791320 | orchestrator | 03:01:55.791 STDOUT terraform:  } 2025-05-25 03:01:55.791348 | orchestrator | 03:01:55.791 STDOUT terraform:  } 2025-05-25 03:01:55.791364 | orchestrator | 03:01:55.791 STDOUT terraform:  # openstack_networking_port_v2.node_port_management[2] will be created 2025-05-25 03:01:55.791410 | orchestrator | 03:01:55.791 STDOUT terraform:  + resource "openstack_networking_port_v2" "node_port_management" { 2025-05-25 03:01:55.791451 | orchestrator | 03:01:55.791 STDOUT terraform:  + admin_state_up = (known after apply) 2025-05-25 03:01:55.791468 | orchestrator | 03:01:55.791 STDOUT terraform:  + all_fixed_ips = (known after apply) 2025-05-25 03:01:55.791513 | orchestrator | 03:01:55.791 STDOUT terraform:  + all_security_group_ids = (known after 
apply) 2025-05-25 03:01:55.791550 | orchestrator | 03:01:55.791 STDOUT terraform:  + all_tags = (known after apply) 2025-05-25 03:01:55.791589 | orchestrator | 03:01:55.791 STDOUT terraform:  + device_id = (known after apply) 2025-05-25 03:01:55.791628 | orchestrator | 03:01:55.791 STDOUT terraform:  + device_owner = (known after apply) 2025-05-25 03:01:55.791666 | orchestrator | 03:01:55.791 STDOUT terraform:  + dns_assignment = (known after apply) 2025-05-25 03:01:55.791704 | orchestrator | 03:01:55.791 STDOUT terraform:  + dns_name = (known after apply) 2025-05-25 03:01:55.791741 | orchestrator | 03:01:55.791 STDOUT terraform:  + id = (known after apply) 2025-05-25 03:01:55.791781 | orchestrator | 03:01:55.791 STDOUT terraform:  + mac_address = (known after apply) 2025-05-25 03:01:55.791797 | orchestrator | 03:01:55.791 STDOUT terraform:  + network_id = (known after apply) 2025-05-25 03:01:55.791867 | orchestrator | 03:01:55.791 STDOUT terraform:  + port_security_enabled = (known after apply) 2025-05-25 03:01:55.791886 | orchestrator | 03:01:55.791 STDOUT terraform:  + qos_policy_id = (known after apply) 2025-05-25 03:01:55.791923 | orchestrator | 03:01:55.791 STDOUT terraform:  + region = (known after apply) 2025-05-25 03:01:55.791962 | orchestrator | 03:01:55.791 STDOUT terraform:  + security_group_ids = (known after apply) 2025-05-25 03:01:55.791989 | orchestrator | 03:01:55.791 STDOUT terraform:  + tenant_id = (known after apply) 2025-05-25 03:01:55.792004 | orchestrator | 03:01:55.791 STDOUT terraform:  + allowed_address_pairs { 2025-05-25 03:01:55.792040 | orchestrator | 03:01:55.791 STDOUT terraform:  + ip_address = "192.168.112.0/20" 2025-05-25 03:01:55.792053 | orchestrator | 03:01:55.792 STDOUT terraform:  } 2025-05-25 03:01:55.792067 | orchestrator | 03:01:55.792 STDOUT terraform:  + allowed_address_pairs { 2025-05-25 03:01:55.792082 | orchestrator | 03:01:55.792 STDOUT terraform:  + ip_address = "192.168.16.254/20" 2025-05-25 03:01:55.792096 | 
orchestrator | 03:01:55.792 STDOUT terraform:  } 2025-05-25 03:01:55.792111 | orchestrator | 03:01:55.792 STDOUT terraform:  + allowed_address_pairs { 2025-05-25 03:01:55.792147 | orchestrator | 03:01:55.792 STDOUT terraform:  + ip_address = "192.168.16.8/20" 2025-05-25 03:01:55.792163 | orchestrator | 03:01:55.792 STDOUT terraform:  } 2025-05-25 03:01:55.792174 | orchestrator | 03:01:55.792 STDOUT terraform:  + allowed_address_pairs { 2025-05-25 03:01:55.792189 | orchestrator | 03:01:55.792 STDOUT terraform:  + ip_address = "192.168.16.9/20" 2025-05-25 03:01:55.792211 | orchestrator | 03:01:55.792 STDOUT terraform:  } 2025-05-25 03:01:55.792225 | orchestrator | 03:01:55.792 STDOUT terraform:  + binding (known after apply) 2025-05-25 03:01:55.792239 | orchestrator | 03:01:55.792 STDOUT terraform:  + fixed_ip { 2025-05-25 03:01:55.792254 | orchestrator | 03:01:55.792 STDOUT terraform:  + ip_address = "192.168.16.12" 2025-05-25 03:01:55.792280 | orchestrator | 03:01:55.792 STDOUT terraform:  + subnet_id = (known after apply) 2025-05-25 03:01:55.792296 | orchestrator | 03:01:55.792 STDOUT terraform:  } 2025-05-25 03:01:55.792308 | orchestrator | 03:01:55.792 STDOUT terraform:  } 2025-05-25 03:01:55.794106 | orchestrator | 03:01:55.792 STDOUT terraform:  # openstack_networking_port_v2.node_port_management[3] will be created 2025-05-25 03:01:55.794151 | orchestrator | 03:01:55.792 STDOUT terraform:  + resource "openstack_networking_port_v2" "node_port_management" { 2025-05-25 03:01:55.794164 | orchestrator | 03:01:55.792 STDOUT terraform:  + admin_state_up = (known after apply) 2025-05-25 03:01:55.794176 | orchestrator | 03:01:55.792 STDOUT terraform:  + all_fixed_ips = (known after apply) 2025-05-25 03:01:55.794187 | orchestrator | 03:01:55.792 STDOUT terraform:  + all_security_group_ids = (known after apply) 2025-05-25 03:01:55.794198 | orchestrator | 03:01:55.792 STDOUT terraform:  + all_tags = (known after apply) 2025-05-25 03:01:55.794210 | orchestrator | 
03:01:55.792 STDOUT terraform:  + device_id = (known after apply) 2025-05-25 03:01:55.794221 | orchestrator | 03:01:55.792 STDOUT terraform:  + device_owner = (known after apply) 2025-05-25 03:01:55.794232 | orchestrator | 03:01:55.792 STDOUT terraform:  + dns_assignment = (known after apply) 2025-05-25 03:01:55.794243 | orchestrator | 03:01:55.792 STDOUT terraform:  + dns_name = (known after apply) 2025-05-25 03:01:55.794254 | orchestrator | 03:01:55.792 STDOUT terraform:  + id = (known after apply) 2025-05-25 03:01:55.794266 | orchestrator | 03:01:55.792 STDOUT terraform:  + mac_address = (known after apply) 2025-05-25 03:01:55.794277 | orchestrator | 03:01:55.792 STDOUT terraform:  + network_id = (known after apply) 2025-05-25 03:01:55.794288 | orchestrator | 03:01:55.792 STDOUT terraform:  + port_security_enabled = (known after apply) 2025-05-25 03:01:55.794299 | orchestrator | 03:01:55.792 STDOUT terraform:  + qos_policy_id = (known after apply) 2025-05-25 03:01:55.794310 | orchestrator | 03:01:55.792 STDOUT terraform:  + region = (known after apply) 2025-05-25 03:01:55.794321 | orchestrator | 03:01:55.792 STDOUT terraform:  + security_group_ids = (known after apply) 2025-05-25 03:01:55.794332 | orchestrator | 03:01:55.792 STDOUT terraform:  + tenant_id = (known after apply) 2025-05-25 03:01:55.794343 | orchestrator | 03:01:55.792 STDOUT terraform:  + allowed_address_pairs { 2025-05-25 03:01:55.794354 | orchestrator | 03:01:55.792 STDOUT terraform:  + ip_address = "192.168.112.0/20" 2025-05-25 03:01:55.794365 | orchestrator | 03:01:55.792 STDOUT terraform:  } 2025-05-25 03:01:55.794376 | orchestrator | 03:01:55.792 STDOUT terraform:  + allowed_address_pairs { 2025-05-25 03:01:55.794387 | orchestrator | 03:01:55.793 STDOUT terraform:  + ip_address = "192.168.16.254/20" 2025-05-25 03:01:55.794414 | orchestrator | 03:01:55.793 STDOUT terraform:  } 2025-05-25 03:01:55.794425 | orchestrator | 03:01:55.793 STDOUT terraform:  + allowed_address_pairs { 2025-05-25 
03:01:55.794436 | orchestrator | 03:01:55.793 STDOUT terraform:  + ip_address = "192.168.16.8/20" 2025-05-25 03:01:55.794447 | orchestrator | 03:01:55.793 STDOUT terraform:  } 2025-05-25 03:01:55.794458 | orchestrator | 03:01:55.793 STDOUT terraform:  + allowed_address_pairs { 2025-05-25 03:01:55.794468 | orchestrator | 03:01:55.793 STDOUT terraform:  + ip_address = "192.168.16.9/20" 2025-05-25 03:01:55.794480 | orchestrator | 03:01:55.793 STDOUT terraform:  } 2025-05-25 03:01:55.794491 | orchestrator | 03:01:55.793 STDOUT terraform:  + binding (known after apply) 2025-05-25 03:01:55.794502 | orchestrator | 03:01:55.793 STDOUT terraform:  + fixed_ip { 2025-05-25 03:01:55.794513 | orchestrator | 03:01:55.793 STDOUT terraform:  + ip_address = "192.168.16.13" 2025-05-25 03:01:55.794524 | orchestrator | 03:01:55.793 STDOUT terraform:  + subnet_id = (known after apply) 2025-05-25 03:01:55.794535 | orchestrator | 03:01:55.793 STDOUT terraform:  } 2025-05-25 03:01:55.794546 | orchestrator | 03:01:55.793 STDOUT terraform:  } 2025-05-25 03:01:55.794557 | orchestrator | 03:01:55.793 STDOUT terraform:  # openstack_networking_port_v2.node_port_management[4] will be created 2025-05-25 03:01:55.794569 | orchestrator | 03:01:55.793 STDOUT terraform:  + resource "openstack_networking_port_v2" "node_port_management" { 2025-05-25 03:01:55.794591 | orchestrator | 03:01:55.793 STDOUT terraform:  + admin_state_up = (known after apply) 2025-05-25 03:01:55.794603 | orchestrator | 03:01:55.793 STDOUT terraform:  + all_fixed_ips = (known after apply) 2025-05-25 03:01:55.794614 | orchestrator | 03:01:55.793 STDOUT terraform:  + all_security_group_ids = (known after apply) 2025-05-25 03:01:55.794634 | orchestrator | 03:01:55.793 STDOUT terraform:  + all_tags = (known after apply) 2025-05-25 03:01:55.794646 | orchestrator | 03:01:55.793 STDOUT terraform:  + device_id = (known after apply) 2025-05-25 03:01:55.794657 | orchestrator | 03:01:55.793 STDOUT terraform:  + device_owner = (known after 
apply) 2025-05-25 03:01:55.794668 | orchestrator | 03:01:55.793 STDOUT terraform:  + dns_assignment = (known after apply) 2025-05-25 03:01:55.794679 | orchestrator | 03:01:55.793 STDOUT terraform:  + dns_name = (known after apply) 2025-05-25 03:01:55.794691 | orchestrator | 03:01:55.793 STDOUT terraform:  + id = (known after apply) 2025-05-25 03:01:55.794702 | orchestrator | 03:01:55.793 STDOUT terraform:  + mac_address = (known after apply) 2025-05-25 03:01:55.794713 | orchestrator | 03:01:55.793 STDOUT terraform:  + network_id = (known after apply) 2025-05-25 03:01:55.794724 | orchestrator | 03:01:55.793 STDOUT terraform:  + port_security_enabled = (known after apply) 2025-05-25 03:01:55.794735 | orchestrator | 03:01:55.793 STDOUT terraform:  + qos_policy_id = (known after apply) 2025-05-25 03:01:55.794746 | orchestrator | 03:01:55.793 STDOUT terraform:  + region = (known after apply) 2025-05-25 03:01:55.794761 | orchestrator | 03:01:55.793 STDOUT terraform:  + security_group_ids = (known after apply) 2025-05-25 03:01:55.794779 | orchestrator | 03:01:55.793 STDOUT terraform:  + tenant_id = (known after apply) 2025-05-25 03:01:55.794790 | orchestrator | 03:01:55.793 STDOUT terraform:  + allowed_address_pairs { 2025-05-25 03:01:55.794801 | orchestrator | 03:01:55.793 STDOUT terraform:  + ip_address = "192.168.112.0/20" 2025-05-25 03:01:55.794812 | orchestrator | 03:01:55.794 STDOUT terraform:  } 2025-05-25 03:01:55.794823 | orchestrator | 03:01:55.794 STDOUT terraform:  + allowed_address_pairs { 2025-05-25 03:01:55.794834 | orchestrator | 03:01:55.794 STDOUT terraform:  + ip_address = "192.168.16.254/20" 2025-05-25 03:01:55.794902 | orchestrator | 03:01:55.794 STDOUT terraform:  } 2025-05-25 03:01:55.794914 | orchestrator | 03:01:55.794 STDOUT terraform:  + allowed_address_pairs { 2025-05-25 03:01:55.794925 | orchestrator | 03:01:55.794 STDOUT terraform:  + ip_address = "192.168.16.8/20" 2025-05-25 03:01:55.794936 | orchestrator | 03:01:55.794 STDOUT terraform:  } 
2025-05-25 03:01:55.794947 | orchestrator | 03:01:55.794 STDOUT terraform:  + allowed_address_pairs { 2025-05-25 03:01:55.794957 | orchestrator | 03:01:55.794 STDOUT terraform:  + ip_address = "192.168.16.9/20" 2025-05-25 03:01:55.794967 | orchestrator | 03:01:55.794 STDOUT terraform:  } 2025-05-25 03:01:55.794976 | orchestrator | 03:01:55.794 STDOUT terraform:  + binding (known after apply) 2025-05-25 03:01:55.794986 | orchestrator | 03:01:55.794 STDOUT terraform:  + fixed_ip { 2025-05-25 03:01:55.794996 | orchestrator | 03:01:55.794 STDOUT terraform:  + ip_address = "192.168.16.14" 2025-05-25 03:01:55.795006 | orchestrator | 03:01:55.794 STDOUT terraform:  + subnet_id = (known after apply) 2025-05-25 03:01:55.795015 | orchestrator | 03:01:55.794 STDOUT terraform:  } 2025-05-25 03:01:55.795025 | orchestrator | 03:01:55.794 STDOUT terraform:  } 2025-05-25 03:01:55.795035 | orchestrator | 03:01:55.794 STDOUT terraform:  # openstack_networking_port_v2.node_port_management[5] will be created 2025-05-25 03:01:55.795045 | orchestrator | 03:01:55.794 STDOUT terraform:  + resource "openstack_networking_port_v2" "node_port_management" { 2025-05-25 03:01:55.795055 | orchestrator | 03:01:55.794 STDOUT terraform:  + admin_state_up = (known after apply) 2025-05-25 03:01:55.795065 | orchestrator | 03:01:55.794 STDOUT terraform:  + all_fixed_ips = (known after apply) 2025-05-25 03:01:55.795087 | orchestrator | 03:01:55.794 STDOUT terraform:  + all_security_group_ids = (known after apply) 2025-05-25 03:01:55.795097 | orchestrator | 03:01:55.794 STDOUT terraform:  + all_tags = (known after apply) 2025-05-25 03:01:55.795107 | orchestrator | 03:01:55.794 STDOUT terraform:  + device_id = (known after apply) 2025-05-25 03:01:55.795117 | orchestrator | 03:01:55.794 STDOUT terraform:  + device_owner = (known after apply) 2025-05-25 03:01:55.795126 | orchestrator | 03:01:55.794 STDOUT terraform:  + dns_assignment = (known after apply) 2025-05-25 03:01:55.795136 | orchestrator | 
03:01:55.794 STDOUT terraform:  + dns_name = (known after apply) 2025-05-25 03:01:55.795146 | orchestrator | 03:01:55.794 STDOUT terraform:  + id = (known after apply) 2025-05-25 03:01:55.795156 | orchestrator | 03:01:55.794 STDOUT terraform:  + mac_address = (known after apply) 2025-05-25 03:01:55.795172 | orchestrator | 03:01:55.794 STDOUT terraform:  + network_id = (known after apply) 2025-05-25 03:01:55.795182 | orchestrator | 03:01:55.794 STDOUT terraform:  + port_security_enabled = (known after apply) 2025-05-25 03:01:55.795192 | orchestrator | 03:01:55.794 STDOUT terraform:  + qos_policy_id = (known after apply) 2025-05-25 03:01:55.795202 | orchestrator | 03:01:55.794 STDOUT terraform:  + region = (known after apply) 2025-05-25 03:01:55.795212 | orchestrator | 03:01:55.794 STDOUT terraform:  + security_group_ids = (known after apply) 2025-05-25 03:01:55.795221 | orchestrator | 03:01:55.794 STDOUT terraform:  + tenant_id = (known after apply) 2025-05-25 03:01:55.795236 | orchestrator | 03:01:55.794 STDOUT terraform:  + allowed_address_pairs { 2025-05-25 03:01:55.795246 | orchestrator | 03:01:55.794 STDOUT terraform:  + ip_address = "192.168.112.0/20" 2025-05-25 03:01:55.795256 | orchestrator | 03:01:55.794 STDOUT terraform:  } 2025-05-25 03:01:55.795265 | orchestrator | 03:01:55.794 STDOUT terraform:  + allowed_address_pairs { 2025-05-25 03:01:55.795275 | orchestrator | 03:01:55.794 STDOUT terraform:  + ip_address = "192.168.16.254/20" 2025-05-25 03:01:55.795285 | orchestrator | 03:01:55.795 STDOUT terraform:  } 2025-05-25 03:01:55.795294 | orchestrator | 03:01:55.795 STDOUT terraform:  + allowed_address_pairs { 2025-05-25 03:01:55.795304 | orchestrator | 03:01:55.795 STDOUT terraform:  + ip_address = "192.168.16.8/20" 2025-05-25 03:01:55.795317 | orchestrator | 03:01:55.795 STDOUT terraform:  } 2025-05-25 03:01:55.795327 | orchestrator | 03:01:55.795 STDOUT terraform:  + allowed_address_pairs { 2025-05-25 03:01:55.795337 | orchestrator | 03:01:55.795 STDOUT 
terraform:  + ip_address = "192.168.16.9/20" 2025-05-25 03:01:55.795347 | orchestrator | 03:01:55.795 STDOUT terraform:  } 2025-05-25 03:01:55.795357 | orchestrator | 03:01:55.795 STDOUT terraform:  + binding (known after apply) 2025-05-25 03:01:55.795367 | orchestrator | 03:01:55.795 STDOUT terraform:  + fixed_ip { 2025-05-25 03:01:55.795376 | orchestrator | 03:01:55.795 STDOUT terraform:  + ip_address = "192.168.16.15" 2025-05-25 03:01:55.795386 | orchestrator | 03:01:55.795 STDOUT terraform:  + subnet_id = (known after apply) 2025-05-25 03:01:55.795396 | orchestrator | 03:01:55.795 STDOUT terraform:  } 2025-05-25 03:01:55.795406 | orchestrator | 03:01:55.795 STDOUT terraform:  } 2025-05-25 03:01:55.795415 | orchestrator | 03:01:55.795 STDOUT terraform:  # openstack_networking_router_interface_v2.router_interface will be created 2025-05-25 03:01:55.795428 | orchestrator | 03:01:55.795 STDOUT terraform:  + resource "openstack_networking_router_interface_v2" "router_interface" { 2025-05-25 03:01:55.795438 | orchestrator | 03:01:55.795 STDOUT terraform:  + force_destroy = false 2025-05-25 03:01:55.795448 | orchestrator | 03:01:55.795 STDOUT terraform:  + id = (known after apply) 2025-05-25 03:01:55.795458 | orchestrator | 03:01:55.795 STDOUT terraform:  + port_id = (known after apply) 2025-05-25 03:01:55.795467 | orchestrator | 03:01:55.795 STDOUT terraform:  + region = (known after apply) 2025-05-25 03:01:55.795480 | orchestrator | 03:01:55.795 STDOUT terraform:  + router_id = (known after apply) 2025-05-25 03:01:55.795496 | orchestrator | 03:01:55.795 STDOUT terraform:  + subnet_id = (known after apply) 2025-05-25 03:01:55.795506 | orchestrator | 03:01:55.795 STDOUT terraform:  } 2025-05-25 03:01:55.795519 | orchestrator | 03:01:55.795 STDOUT terraform:  # openstack_networking_router_v2.router will be created 2025-05-25 03:01:55.798332 | orchestrator | 03:01:55.795 STDOUT terraform:  + resource "openstack_networking_router_v2" "router" { 2025-05-25 03:01:55.798378 
| orchestrator | 03:01:55.795 STDOUT terraform:  + admin_state_up = (known after apply) 2025-05-25 03:01:55.798384 | orchestrator | 03:01:55.795 STDOUT terraform:  + all_tags = (known after apply) 2025-05-25 03:01:55.798388 | orchestrator | 03:01:55.795 STDOUT terraform:  + availability_zone_hints = [ 2025-05-25 03:01:55.798393 | orchestrator | 03:01:55.795 STDOUT terraform:  + "nova", 2025-05-25 03:01:55.798398 | orchestrator | 03:01:55.795 STDOUT terraform:  ] 2025-05-25 03:01:55.798403 | orchestrator | 03:01:55.795 STDOUT terraform:  + distributed = (known after apply) 2025-05-25 03:01:55.798407 | orchestrator | 03:01:55.795 STDOUT terraform:  + enable_snat = (known after apply) 2025-05-25 03:01:55.798411 | orchestrator | 03:01:55.795 STDOUT terraform:  + external_network_id = "e6be7364-bfd8-4de7-8120-8f41c69a139a" 2025-05-25 03:01:55.798416 | orchestrator | 03:01:55.795 STDOUT terraform:  + id = (known after apply) 2025-05-25 03:01:55.798420 | orchestrator | 03:01:55.795 STDOUT terraform:  + name = "testbed" 2025-05-25 03:01:55.798423 | orchestrator | 03:01:55.795 STDOUT terraform:  + region = (known after apply) 2025-05-25 03:01:55.798427 | orchestrator | 03:01:55.795 STDOUT terraform:  + tenant_id = (known after apply) 2025-05-25 03:01:55.798431 | orchestrator | 03:01:55.795 STDOUT terraform:  + external_fixed_ip (known after apply) 2025-05-25 03:01:55.798435 | orchestrator | 03:01:55.795 STDOUT terraform:  } 2025-05-25 03:01:55.798439 | orchestrator | 03:01:55.795 STDOUT terraform:  # openstack_networking_secgroup_rule_v2.security_group_management_rule1 will b 2025-05-25 03:01:55.798444 | orchestrator | 03:01:55.796 STDOUT terraform: e created 2025-05-25 03:01:55.798448 | orchestrator | 03:01:55.796 STDOUT terraform:  + resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule1" { 2025-05-25 03:01:55.798452 | orchestrator | 03:01:55.796 STDOUT terraform:  + description = "ssh" 2025-05-25 03:01:55.798456 | orchestrator | 03:01:55.796 
STDOUT terraform:  + direction = "ingress" 2025-05-25 03:01:55.798460 | orchestrator | 03:01:55.796 STDOUT terraform:  + ethertype = "IPv4" 2025-05-25 03:01:55.798464 | orchestrator | 03:01:55.796 STDOUT terraform:  + id = (known after apply) 2025-05-25 03:01:55.798468 | orchestrator | 03:01:55.796 STDOUT terraform:  + port_range_max = 22 2025-05-25 03:01:55.798472 | orchestrator | 03:01:55.796 STDOUT terraform:  + port_range_min = 22 2025-05-25 03:01:55.798476 | orchestrator | 03:01:55.796 STDOUT terraform:  + protocol = "tcp" 2025-05-25 03:01:55.798480 | orchestrator | 03:01:55.796 STDOUT terraform:  + region = (known after apply) 2025-05-25 03:01:55.798483 | orchestrator | 03:01:55.796 STDOUT terraform:  + remote_group_id = (known after apply) 2025-05-25 03:01:55.798498 | orchestrator | 03:01:55.796 STDOUT terraform:  + remote_ip_prefix = "0.0.0.0/0" 2025-05-25 03:01:55.798502 | orchestrator | 03:01:55.796 STDOUT terraform:  + security_group_id = (known after apply) 2025-05-25 03:01:55.798506 | orchestrator | 03:01:55.796 STDOUT terraform:  + tenant_id = (known after apply) 2025-05-25 03:01:55.798509 | orchestrator | 03:01:55.796 STDOUT terraform:  } 2025-05-25 03:01:55.798513 | orchestrator | 03:01:55.796 STDOUT terraform:  # openstack_networking_secgroup_rule_v2.security_group_management_rule2 will be created 2025-05-25 03:01:55.798517 | orchestrator | 03:01:55.796 STDOUT terraform:  + resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule2" { 2025-05-25 03:01:55.798521 | orchestrator | 03:01:55.796 STDOUT terraform:  + description = "wireguard" 2025-05-25 03:01:55.798525 | orchestrator | 03:01:55.796 STDOUT terraform:  + direction = "ingress" 2025-05-25 03:01:55.798529 | orchestrator | 03:01:55.796 STDOUT terraform:  + ethertype = "IPv4" 2025-05-25 03:01:55.798533 | orchestrator | 03:01:55.796 STDOUT terraform:  + id = (known after apply) 2025-05-25 03:01:55.798548 | orchestrator | 03:01:55.796 STDOUT terraform:  + port_range_max = 
51820 2025-05-25 03:01:55.798553 | orchestrator | 03:01:55.796 STDOUT terraform:  + port_range_min = 51820 2025-05-25 03:01:55.798557 | orchestrator | 03:01:55.796 STDOUT terraform:  + protocol = "udp" 2025-05-25 03:01:55.798560 | orchestrator | 03:01:55.796 STDOUT terraform:  + region = (known after apply) 2025-05-25 03:01:55.798564 | orchestrator | 03:01:55.796 STDOUT terraform:  + remote_group_id = (known after apply) 2025-05-25 03:01:55.798568 | orchestrator | 03:01:55.796 STDOUT terraform:  + remote_ip_prefix = "0.0.0.0/0" 2025-05-25 03:01:55.798572 | orchestrator | 03:01:55.796 STDOUT terraform:  + security_group_id = (known after apply) 2025-05-25 03:01:55.798581 | orchestrator | 03:01:55.796 STDOUT terraform:  + tenant_id = (known after apply) 2025-05-25 03:01:55.798585 | orchestrator | 03:01:55.796 STDOUT terraform:  } 2025-05-25 03:01:55.798589 | orchestrator | 03:01:55.796 STDOUT terraform:  # openstack_networking_secgroup_rule_v2.security_group_management_rule3 will be created 2025-05-25 03:01:55.798593 | orchestrator | 03:01:55.796 STDOUT terraform:  + resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule3" { 2025-05-25 03:01:55.798597 | orchestrator | 03:01:55.796 STDOUT terraform:  + direction = "ingress" 2025-05-25 03:01:55.798601 | orchestrator | 03:01:55.796 STDOUT terraform:  + ethertype = "IPv4" 2025-05-25 03:01:55.798607 | orchestrator | 03:01:55.796 STDOUT terraform:  + id = (known after apply) 2025-05-25 03:01:55.798611 | orchestrator | 03:01:55.796 STDOUT terraform:  + protocol = "tcp" 2025-05-25 03:01:55.798615 | orchestrator | 03:01:55.796 STDOUT terraform:  + region = (known after apply) 2025-05-25 03:01:55.798619 | orchestrator | 03:01:55.796 STDOUT terraform:  + remote_group_id = (known after apply) 2025-05-25 03:01:55.798622 | orchestrator | 03:01:55.796 STDOUT terraform:  + remote_ip_prefix = "192.168.16.0/20" 2025-05-25 03:01:55.798626 | orchestrator | 03:01:55.796 STDOUT terraform:  + security_group_id = 
(known after apply) 2025-05-25 03:01:55.798630 | orchestrator | 03:01:55.797 STDOUT terraform:  + tenant_id = (known after apply) 2025-05-25 03:01:55.798637 | orchestrator | 03:01:55.797 STDOUT terraform:  } 2025-05-25 03:01:55.798641 | orchestrator | 03:01:55.797 STDOUT terraform:  # openstack_networking_secgroup_rule_v2.security_group_management_rule4 will be created 2025-05-25 03:01:55.798645 | orchestrator | 03:01:55.797 STDOUT terraform:  + resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule4" { 2025-05-25 03:01:55.798649 | orchestrator | 03:01:55.797 STDOUT terraform:  + direction = "ingress" 2025-05-25 03:01:55.798652 | orchestrator | 03:01:55.797 STDOUT terraform:  + ethertype = "IPv4" 2025-05-25 03:01:55.798656 | orchestrator | 03:01:55.797 STDOUT terraform:  + id = (known after apply) 2025-05-25 03:01:55.798660 | orchestrator | 03:01:55.797 STDOUT terraform:  + protocol = "udp" 2025-05-25 03:01:55.798664 | orchestrator | 03:01:55.797 STDOUT terraform:  + region = (known after apply) 2025-05-25 03:01:55.798668 | orchestrator | 03:01:55.797 STDOUT terraform:  + remote_group_id = (known after apply) 2025-05-25 03:01:55.798671 | orchestrator | 03:01:55.797 STDOUT terraform:  + remote_ip_prefix = "192.168.16.0/20" 2025-05-25 03:01:55.798675 | orchestrator | 03:01:55.797 STDOUT terraform:  + security_group_id = (known after apply) 2025-05-25 03:01:55.798679 | orchestrator | 03:01:55.797 STDOUT terraform:  + tenant_id = (known after apply) 2025-05-25 03:01:55.798682 | orchestrator | 03:01:55.797 STDOUT terraform:  } 2025-05-25 03:01:55.798686 | orchestrator | 03:01:55.797 STDOUT terraform:  # openstack_networking_secgroup_rule_v2.security_group_management_rule5 will be created 2025-05-25 03:01:55.798690 | orchestrator | 03:01:55.797 STDOUT terraform:  + resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule5" { 2025-05-25 03:01:55.798694 | orchestrator | 03:01:55.797 STDOUT terraform:  + direction = 
"ingress" 2025-05-25 03:01:55.798697 | orchestrator | 03:01:55.797 STDOUT terraform:  + ethertype = "IPv4" 2025-05-25 03:01:55.798706 | orchestrator | 03:01:55.797 STDOUT terraform:  + id = (known after apply) 2025-05-25 03:01:55.798710 | orchestrator | 03:01:55.797 STDOUT terraform:  + protocol = "icmp" 2025-05-25 03:01:55.798714 | orchestrator | 03:01:55.797 STDOUT terraform:  + region = (known after apply) 2025-05-25 03:01:55.798718 | orchestrator | 03:01:55.797 STDOUT terraform:  + remote_group_id = (known after apply) 2025-05-25 03:01:55.798721 | orchestrator | 03:01:55.797 STDOUT terraform:  + remote_ip_prefix = "0.0.0.0/0" 2025-05-25 03:01:55.798725 | orchestrator | 03:01:55.797 STDOUT terraform:  + security_group_id = (known after apply) 2025-05-25 03:01:55.798729 | orchestrator | 03:01:55.797 STDOUT terraform:  + tenant_id = (known after apply) 2025-05-25 03:01:55.798732 | orchestrator | 03:01:55.797 STDOUT terraform:  } 2025-05-25 03:01:55.798736 | orchestrator | 03:01:55.797 STDOUT terraform:  # openstack_networking_secgroup_rule_v2.security_group_node_rule1 will be created 2025-05-25 03:01:55.798740 | orchestrator | 03:01:55.797 STDOUT terraform:  + resource "openstack_networking_secgroup_rule_v2" "security_group_node_rule1" { 2025-05-25 03:01:55.798744 | orchestrator | 03:01:55.797 STDOUT terraform:  + direction = "ingress" 2025-05-25 03:01:55.798748 | orchestrator | 03:01:55.797 STDOUT terraform:  + ethertype = "IPv4" 2025-05-25 03:01:55.798754 | orchestrator | 03:01:55.797 STDOUT terraform:  + id = (known after apply) 2025-05-25 03:01:55.798761 | orchestrator | 03:01:55.797 STDOUT terraform:  + protocol = "tcp" 2025-05-25 03:01:55.798764 | orchestrator | 03:01:55.797 STDOUT terraform:  + region = (known after apply) 2025-05-25 03:01:55.798768 | orchestrator | 03:01:55.797 STDOUT terraform:  + remote_group_id = (known after apply) 2025-05-25 03:01:55.798772 | orchestrator | 03:01:55.797 STDOUT terraform:  + remote_ip_prefix = "0.0.0.0/0" 2025-05-25 
03:01:55.798776 | orchestrator | 03:01:55.797 STDOUT terraform:  + security_group_id = (known after apply) 2025-05-25 03:01:55.798780 | orchestrator | 03:01:55.797 STDOUT terraform:  + tenant_id = (known after apply) 2025-05-25 03:01:55.798784 | orchestrator | 03:01:55.797 STDOUT terraform:  } 2025-05-25 03:01:55.798787 | orchestrator | 03:01:55.797 STDOUT terraform:  # openstack_networking_secgroup_rule_v2.security_group_node_rule2 will be created 2025-05-25 03:01:55.798791 | orchestrator | 03:01:55.798 STDOUT terraform:  + resource "openstack_networking_secgroup_rule_v2" "security_group_node_rule2" { 2025-05-25 03:01:55.798795 | orchestrator | 03:01:55.798 STDOUT terraform:  + direction = "ingress" 2025-05-25 03:01:55.798799 | orchestrator | 03:01:55.798 STDOUT terraform:  + ethertype = "IPv4" 2025-05-25 03:01:55.798802 | orchestrator | 03:01:55.798 STDOUT terraform:  + id = (known after apply) 2025-05-25 03:01:55.798806 | orchestrator | 03:01:55.798 STDOUT terraform:  + protocol = "udp" 2025-05-25 03:01:55.798810 | orchestrator | 03:01:55.798 STDOUT terraform:  + region = (known after apply) 2025-05-25 03:01:55.798814 | orchestrator | 03:01:55.798 STDOUT terraform:  + remote_group_id = (known after apply) 2025-05-25 03:01:55.798818 | orchestrator | 03:01:55.798 STDOUT terraform:  + remote_ip_prefix = "0.0.0.0/0" 2025-05-25 03:01:55.798821 | orchestrator | 03:01:55.798 STDOUT terraform:  + security_group_id = (known after apply) 2025-05-25 03:01:55.798825 | orchestrator | 03:01:55.798 STDOUT terraform:  + tenant_id = (known after apply) 2025-05-25 03:01:55.798829 | orchestrator | 03:01:55.798 STDOUT terraform:  } 2025-05-25 03:01:55.798833 | orchestrator | 03:01:55.798 STDOUT terraform:  # openstack_networking_secgroup_rule_v2.security_group_node_rule3 will be created 2025-05-25 03:01:55.798872 | orchestrator | 03:01:55.798 STDOUT terraform:  + resource "openstack_networking_secgroup_rule_v2" "security_group_node_rule3" { 2025-05-25 03:01:55.798876 | orchestrator 
| 03:01:55.798 STDOUT terraform:  + direction = "ingress" 2025-05-25 03:01:55.798880 | orchestrator | 03:01:55.798 STDOUT terraform:  + ethertype = "IPv4" 2025-05-25 03:01:55.798883 | orchestrator | 03:01:55.798 STDOUT terraform:  + id = (known after apply) 2025-05-25 03:01:55.798890 | orchestrator | 03:01:55.798 STDOUT terraform:  + protocol = "icmp" 2025-05-25 03:01:55.798894 | orchestrator | 03:01:55.798 STDOUT terraform:  + region = (known after apply) 2025-05-25 03:01:55.798898 | orchestrator | 03:01:55.798 STDOUT terraform:  + remote_group_id = (known after apply) 2025-05-25 03:01:55.798902 | orchestrator | 03:01:55.798 STDOUT terraform:  + remote_ip_prefix = "0.0.0.0/0" 2025-05-25 03:01:55.798909 | orchestrator | 03:01:55.798 STDOUT terraform:  + security_group_id = (known after apply) 2025-05-25 03:01:55.798913 | orchestrator | 03:01:55.798 STDOUT terraform:  + tenant_id = (known after apply) 2025-05-25 03:01:55.798917 | orchestrator | 03:01:55.798 STDOUT terraform:  } 2025-05-25 03:01:55.798920 | orchestrator | 03:01:55.798 STDOUT terraform:  # openstack_networking_secgroup_rule_v2.security_group_rule_vrrp will be created 2025-05-25 03:01:55.798924 | orchestrator | 03:01:55.798 STDOUT terraform:  + resource "openstack_networking_secgroup_rule_v2" "security_group_rule_vrrp" { 2025-05-25 03:01:55.798928 | orchestrator | 03:01:55.798 STDOUT terraform:  + description = "vrrp" 2025-05-25 03:01:55.798932 | orchestrator | 03:01:55.798 STDOUT terraform:  + direction = "ingress" 2025-05-25 03:01:55.798936 | orchestrator | 03:01:55.798 STDOUT terraform:  + ethertype = "IPv4" 2025-05-25 03:01:55.798939 | orchestrator | 03:01:55.798 STDOUT terraform:  + id = (known after apply) 2025-05-25 03:01:55.798949 | orchestrator | 03:01:55.798 STDOUT terraform:  + protocol = "112" 2025-05-25 03:01:55.798953 | orchestrator | 03:01:55.798 STDOUT terraform:  + region = (known after apply) 2025-05-25 03:01:55.798958 | orchestrator | 03:01:55.798 STDOUT terraform:  + remote_group_id 
= (known after apply) 2025-05-25 03:01:55.798962 | orchestrator | 03:01:55.798 STDOUT terraform:  + remote_ip_prefix = "0.0.0.0/0" 2025-05-25 03:01:55.798966 | orchestrator | 03:01:55.798 STDOUT terraform:  + security_group_id = (known after apply) 2025-05-25 03:01:55.798972 | orchestrator | 03:01:55.798 STDOUT terraform:  + tenant_id = (known after apply) 2025-05-25 03:01:55.799000 | orchestrator | 03:01:55.798 STDOUT terraform:  } 2025-05-25 03:01:55.799049 | orchestrator | 03:01:55.798 STDOUT terraform:  # openstack_networking_secgroup_v2.security_group_management will be created 2025-05-25 03:01:55.799094 | orchestrator | 03:01:55.799 STDOUT terraform:  + resource "openstack_networking_secgroup_v2" "security_group_management" { 2025-05-25 03:01:55.799122 | orchestrator | 03:01:55.799 STDOUT terraform:  + all_tags = (known after apply) 2025-05-25 03:01:55.799158 | orchestrator | 03:01:55.799 STDOUT terraform:  + description = "management security group" 2025-05-25 03:01:55.799188 | orchestrator | 03:01:55.799 STDOUT terraform:  + id = (known after apply) 2025-05-25 03:01:55.799217 | orchestrator | 03:01:55.799 STDOUT terraform:  + name = "testbed-management" 2025-05-25 03:01:55.799245 | orchestrator | 03:01:55.799 STDOUT terraform:  + region = (known after apply) 2025-05-25 03:01:55.799276 | orchestrator | 03:01:55.799 STDOUT terraform:  + stateful = (known after apply) 2025-05-25 03:01:55.799302 | orchestrator | 03:01:55.799 STDOUT terraform:  + tenant_id = (known after apply) 2025-05-25 03:01:55.799309 | orchestrator | 03:01:55.799 STDOUT terraform:  } 2025-05-25 03:01:55.799359 | orchestrator | 03:01:55.799 STDOUT terraform:  # openstack_networking_secgroup_v2.security_group_node will be created 2025-05-25 03:01:55.799408 | orchestrator | 03:01:55.799 STDOUT terraform:  + resource "openstack_networking_secgroup_v2" "security_group_node" { 2025-05-25 03:01:55.799436 | orchestrator | 03:01:55.799 STDOUT terraform:  + all_tags = (known after apply) 2025-05-25 
03:01:55.799464 | orchestrator | 03:01:55.799 STDOUT terraform:  + description = "node security group" 2025-05-25 03:01:55.799494 | orchestrator | 03:01:55.799 STDOUT terraform:  + id = (known after apply) 2025-05-25 03:01:55.799519 | orchestrator | 03:01:55.799 STDOUT terraform:  + name = "testbed-node" 2025-05-25 03:01:55.799547 | orchestrator | 03:01:55.799 STDOUT terraform:  + region = (known after apply) 2025-05-25 03:01:55.799575 | orchestrator | 03:01:55.799 STDOUT terraform:  + stateful = (known after apply) 2025-05-25 03:01:55.799603 | orchestrator | 03:01:55.799 STDOUT terraform:  + tenant_id = (known after apply) 2025-05-25 03:01:55.799610 | orchestrator | 03:01:55.799 STDOUT terraform:  } 2025-05-25 03:01:55.799676 | orchestrator | 03:01:55.799 STDOUT terraform:  # openstack_networking_subnet_v2.subnet_management will be created 2025-05-25 03:01:55.799704 | orchestrator | 03:01:55.799 STDOUT terraform:  + resource "openstack_networking_subnet_v2" "subnet_management" { 2025-05-25 03:01:55.799735 | orchestrator | 03:01:55.799 STDOUT terraform:  + all_tags = (known after apply) 2025-05-25 03:01:55.799766 | orchestrator | 03:01:55.799 STDOUT terraform:  + cidr = "192.168.16.0/20" 2025-05-25 03:01:55.799772 | orchestrator | 03:01:55.799 STDOUT terraform:  + dns_nameservers = [ 2025-05-25 03:01:55.799799 | orchestrator | 03:01:55.799 STDOUT terraform:  + "8.8.8.8", 2025-05-25 03:01:55.799805 | orchestrator | 03:01:55.799 STDOUT terraform:  + "9.9.9.9", 2025-05-25 03:01:55.799824 | orchestrator | 03:01:55.799 STDOUT terraform:  ] 2025-05-25 03:01:55.799830 | orchestrator | 03:01:55.799 STDOUT terraform:  + enable_dhcp = true 2025-05-25 03:01:55.799880 | orchestrator | 03:01:55.799 STDOUT terraform:  + gateway_ip = (known after apply) 2025-05-25 03:01:55.799912 | orchestrator | 03:01:55.799 STDOUT terraform:  + id = (known after apply) 2025-05-25 03:01:55.799918 | orchestrator | 03:01:55.799 STDOUT terraform:  + ip_version = 4 2025-05-25 03:01:55.799958 | 
orchestrator | 03:01:55.799 STDOUT terraform:  + ipv6_address_mode = (known after apply) 2025-05-25 03:01:55.799988 | orchestrator | 03:01:55.799 STDOUT terraform:  + ipv6_ra_mode = (known after apply) 2025-05-25 03:01:55.800026 | orchestrator | 03:01:55.799 STDOUT terraform:  + name = "subnet-testbed-management" 2025-05-25 03:01:55.800058 | orchestrator | 03:01:55.800 STDOUT terraform:  + network_id = (known after apply) 2025-05-25 03:01:55.800083 | orchestrator | 03:01:55.800 STDOUT terraform:  + no_gateway = false 2025-05-25 03:01:55.800110 | orchestrator | 03:01:55.800 STDOUT terraform:  + region = (known after apply) 2025-05-25 03:01:55.800140 | orchestrator | 03:01:55.800 STDOUT terraform:  + service_types = (known after apply) 2025-05-25 03:01:55.800172 | orchestrator | 03:01:55.800 STDOUT terraform:  + tenant_id = (known after apply) 2025-05-25 03:01:55.800178 | orchestrator | 03:01:55.800 STDOUT terraform:  + allocation_pool { 2025-05-25 03:01:55.800210 | orchestrator | 03:01:55.800 STDOUT terraform:  + end = "192.168.31.250" 2025-05-25 03:01:55.800235 | orchestrator | 03:01:55.800 STDOUT terraform:  + start = "192.168.31.200" 2025-05-25 03:01:55.800241 | orchestrator | 03:01:55.800 STDOUT terraform:  } 2025-05-25 03:01:55.800246 | orchestrator | 03:01:55.800 STDOUT terraform:  } 2025-05-25 03:01:55.800280 | orchestrator | 03:01:55.800 STDOUT terraform:  # terraform_data.image will be created 2025-05-25 03:01:55.800326 | orchestrator | 03:01:55.800 STDOUT terraform:  + resource "terraform_data" "image" { 2025-05-25 03:01:55.800334 | orchestrator | 03:01:55.800 STDOUT terraform:  + id = (known after apply) 2025-05-25 03:01:55.800338 | orchestrator | 03:01:55.800 STDOUT terraform:  + input = "Ubuntu 24.04" 2025-05-25 03:01:55.800369 | orchestrator | 03:01:55.800 STDOUT terraform:  + output = (known after apply) 2025-05-25 03:01:55.800374 | orchestrator | 03:01:55.800 STDOUT terraform:  } 2025-05-25 03:01:55.800407 | orchestrator | 03:01:55.800 STDOUT 
terraform:  # terraform_data.image_node will be created 2025-05-25 03:01:55.800435 | orchestrator | 03:01:55.800 STDOUT terraform:  + resource "terraform_data" "image_node" { 2025-05-25 03:01:55.800459 | orchestrator | 03:01:55.800 STDOUT terraform:  + id = (known after apply) 2025-05-25 03:01:55.800479 | orchestrator | 03:01:55.800 STDOUT terraform:  + input = "Ubuntu 24.04" 2025-05-25 03:01:55.800498 | orchestrator | 03:01:55.800 STDOUT terraform:  + output = (known after apply) 2025-05-25 03:01:55.800502 | orchestrator | 03:01:55.800 STDOUT terraform:  } 2025-05-25 03:01:55.800535 | orchestrator | 03:01:55.800 STDOUT terraform: Plan: 64 to add, 0 to change, 0 to destroy. 2025-05-25 03:01:55.800541 | orchestrator | 03:01:55.800 STDOUT terraform: Changes to Outputs: 2025-05-25 03:01:55.800571 | orchestrator | 03:01:55.800 STDOUT terraform:  + manager_address = (sensitive value) 2025-05-25 03:01:55.800595 | orchestrator | 03:01:55.800 STDOUT terraform:  + private_key = (sensitive value) 2025-05-25 03:01:56.015450 | orchestrator | 03:01:56.015 STDOUT terraform: terraform_data.image: Creating... 2025-05-25 03:01:56.015549 | orchestrator | 03:01:56.015 STDOUT terraform: terraform_data.image_node: Creating... 2025-05-25 03:01:56.015580 | orchestrator | 03:01:56.015 STDOUT terraform: terraform_data.image: Creation complete after 0s [id=8458e092-870c-5bfc-6682-a8d0fa2f024d] 2025-05-25 03:01:56.015701 | orchestrator | 03:01:56.015 STDOUT terraform: terraform_data.image_node: Creation complete after 0s [id=61dffe15-d3a8-216d-51f9-14f33dc64817] 2025-05-25 03:01:56.022445 | orchestrator | 03:01:56.022 STDOUT terraform: data.openstack_images_image_v2.image_node: Reading... 2025-05-25 03:01:56.030873 | orchestrator | 03:01:56.030 STDOUT terraform: data.openstack_images_image_v2.image: Reading... 2025-05-25 03:01:56.030963 | orchestrator | 03:01:56.030 STDOUT terraform: openstack_compute_keypair_v2.key: Creating... 
2025-05-25 03:01:56.033591 | orchestrator | 03:01:56.033 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[6]: Creating... 2025-05-25 03:01:56.036203 | orchestrator | 03:01:56.036 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[5]: Creating... 2025-05-25 03:01:56.036942 | orchestrator | 03:01:56.036 STDOUT terraform: openstack_networking_network_v2.net_management: Creating... 2025-05-25 03:01:56.040745 | orchestrator | 03:01:56.040 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[8]: Creating... 2025-05-25 03:01:56.046727 | orchestrator | 03:01:56.046 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[0]: Creating... 2025-05-25 03:01:56.048025 | orchestrator | 03:01:56.047 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[3]: Creating... 2025-05-25 03:01:56.049431 | orchestrator | 03:01:56.049 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[7]: Creating... 2025-05-25 03:01:56.522810 | orchestrator | 03:01:56.522 STDOUT terraform: data.openstack_images_image_v2.image_node: Read complete after 1s [id=cd9ae1ce-c4eb-4380-9087-2aa040df6990] 2025-05-25 03:01:56.531631 | orchestrator | 03:01:56.531 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[4]: Creating... 2025-05-25 03:01:56.589918 | orchestrator | 03:01:56.589 STDOUT terraform: openstack_compute_keypair_v2.key: Creation complete after 1s [id=testbed] 2025-05-25 03:01:56.596304 | orchestrator | 03:01:56.596 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[1]: Creating... 2025-05-25 03:01:56.819450 | orchestrator | 03:01:56.819 STDOUT terraform: data.openstack_images_image_v2.image: Read complete after 1s [id=cd9ae1ce-c4eb-4380-9087-2aa040df6990] 2025-05-25 03:01:56.831756 | orchestrator | 03:01:56.831 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[2]: Creating... 
2025-05-25 03:02:02.038567 | orchestrator | 03:02:02.038 STDOUT terraform: openstack_networking_network_v2.net_management: Creation complete after 6s [id=5099023b-b461-4d23-bd65-3e4e5e51cf8c] 2025-05-25 03:02:02.050281 | orchestrator | 03:02:02.049 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[1]: Creating... 2025-05-25 03:02:06.036415 | orchestrator | 03:02:06.036 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[6]: Still creating... [10s elapsed] 2025-05-25 03:02:06.037384 | orchestrator | 03:02:06.037 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[5]: Still creating... [10s elapsed] 2025-05-25 03:02:06.041926 | orchestrator | 03:02:06.041 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[8]: Still creating... [10s elapsed] 2025-05-25 03:02:06.048137 | orchestrator | 03:02:06.047 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[0]: Still creating... [10s elapsed] 2025-05-25 03:02:06.049279 | orchestrator | 03:02:06.049 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[3]: Still creating... [10s elapsed] 2025-05-25 03:02:06.050504 | orchestrator | 03:02:06.050 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[7]: Still creating... [10s elapsed] 2025-05-25 03:02:06.532364 | orchestrator | 03:02:06.532 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[4]: Still creating... [10s elapsed] 2025-05-25 03:02:06.597913 | orchestrator | 03:02:06.597 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[1]: Still creating... [10s elapsed] 2025-05-25 03:02:06.623957 | orchestrator | 03:02:06.623 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[6]: Creation complete after 11s [id=cdfa8505-de86-48ff-8ed6-b6e1381a94b2] 2025-05-25 03:02:06.632066 | orchestrator | 03:02:06.631 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[5]: Creating... 
2025-05-25 03:02:06.640561 | orchestrator | 03:02:06.640 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[3]: Creation complete after 11s [id=4276f8fa-1a41-4d3c-8190-a1d2d3b80049] 2025-05-25 03:02:06.651449 | orchestrator | 03:02:06.651 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[0]: Creating... 2025-05-25 03:02:06.667727 | orchestrator | 03:02:06.667 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[5]: Creation complete after 11s [id=8968a7f7-851b-405b-80f4-de48ab1dffee] 2025-05-25 03:02:06.677650 | orchestrator | 03:02:06.677 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[7]: Creation complete after 11s [id=b0e50223-c4d0-48f7-a5f8-d1963b067c82] 2025-05-25 03:02:06.679773 | orchestrator | 03:02:06.679 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[0]: Creation complete after 11s [id=dac67b12-4a3b-49b0-a18f-dd9740769fda] 2025-05-25 03:02:06.679976 | orchestrator | 03:02:06.679 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[4]: Creating... 2025-05-25 03:02:06.687784 | orchestrator | 03:02:06.687 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[8]: Creation complete after 11s [id=603d0154-8a06-450e-a743-756d85b1bc6a] 2025-05-25 03:02:06.687898 | orchestrator | 03:02:06.687 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[3]: Creating... 2025-05-25 03:02:06.688267 | orchestrator | 03:02:06.688 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[2]: Creating... 2025-05-25 03:02:06.701397 | orchestrator | 03:02:06.701 STDOUT terraform: local_file.id_rsa_pub: Creating... 2025-05-25 03:02:06.706068 | orchestrator | 03:02:06.705 STDOUT terraform: local_file.id_rsa_pub: Creation complete after 0s [id=3cb16218c7633609f2145bbbdae86ea51ea6cb3a] 2025-05-25 03:02:06.719147 | orchestrator | 03:02:06.718 STDOUT terraform: local_sensitive_file.id_rsa: Creating... 
2025-05-25 03:02:06.726449 | orchestrator | 03:02:06.724 STDOUT terraform: local_sensitive_file.id_rsa: Creation complete after 0s [id=757976709184e4e1f7345008740744ad2af90404] 2025-05-25 03:02:06.726745 | orchestrator | 03:02:06.726 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[4]: Creation complete after 10s [id=38e86a76-d592-4447-9c79-2151d2192c3f] 2025-05-25 03:02:06.739376 | orchestrator | 03:02:06.739 STDOUT terraform: openstack_networking_subnet_v2.subnet_management: Creating... 2025-05-25 03:02:06.739448 | orchestrator | 03:02:06.739 STDOUT terraform: openstack_blockstorage_volume_v3.manager_base_volume[0]: Creating... 2025-05-25 03:02:06.776482 | orchestrator | 03:02:06.776 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[1]: Creation complete after 10s [id=17d1c6f1-1305-4025-b6c8-ee1be555c001] 2025-05-25 03:02:06.833028 | orchestrator | 03:02:06.832 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[2]: Still creating... [10s elapsed] 2025-05-25 03:02:07.025113 | orchestrator | 03:02:07.024 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[2]: Creation complete after 10s [id=201f277c-fdb2-416e-b305-0d8ba90b32cd] 2025-05-25 03:02:12.053034 | orchestrator | 03:02:12.052 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[1]: Still creating... [10s elapsed] 2025-05-25 03:02:12.397931 | orchestrator | 03:02:12.397 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[1]: Creation complete after 10s [id=e659d218-d092-4e32-8aa6-14fd719ec7d5] 2025-05-25 03:02:12.662150 | orchestrator | 03:02:12.661 STDOUT terraform: openstack_networking_subnet_v2.subnet_management: Creation complete after 6s [id=6fc9f751-26bb-4b8b-8b79-883d94d0b2f2] 2025-05-25 03:02:12.667894 | orchestrator | 03:02:12.667 STDOUT terraform: openstack_networking_router_v2.router: Creating... 
2025-05-25 03:02:16.632957 | orchestrator | 03:02:16.632 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[5]: Still creating... [10s elapsed] 2025-05-25 03:02:16.652046 | orchestrator | 03:02:16.651 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[0]: Still creating... [10s elapsed] 2025-05-25 03:02:16.681461 | orchestrator | 03:02:16.681 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[4]: Still creating... [10s elapsed] 2025-05-25 03:02:16.689914 | orchestrator | 03:02:16.689 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[3]: Still creating... [10s elapsed] 2025-05-25 03:02:16.690074 | orchestrator | 03:02:16.689 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[2]: Still creating... [10s elapsed] 2025-05-25 03:02:16.740453 | orchestrator | 03:02:16.740 STDOUT terraform: openstack_blockstorage_volume_v3.manager_base_volume[0]: Still creating... [10s elapsed] 2025-05-25 03:02:17.008401 | orchestrator | 03:02:17.008 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[5]: Creation complete after 10s [id=2fcb52f5-9fc6-4d4d-aaa1-77a08e6dc4e0] 2025-05-25 03:02:17.029046 | orchestrator | 03:02:17.028 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[0]: Creation complete after 10s [id=8a4e249f-8a05-4326-b566-23f41d92ff9f] 2025-05-25 03:02:17.049006 | orchestrator | 03:02:17.048 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[4]: Creation complete after 10s [id=43c767cb-0159-4619-b6d4-e498fa6e5c83] 2025-05-25 03:02:17.091287 | orchestrator | 03:02:17.091 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[3]: Creation complete after 10s [id=d33fe3e5-8e27-4cea-af5e-c9a31aaf43f7] 2025-05-25 03:02:17.091682 | orchestrator | 03:02:17.091 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[2]: Creation complete after 10s [id=87443a00-a40d-492e-8034-179827711ad7] 2025-05-25 03:02:17.104623 | 
orchestrator | 03:02:17.104 STDOUT terraform: openstack_blockstorage_volume_v3.manager_base_volume[0]: Creation complete after 10s [id=1d30d4bb-c629-4212-924a-c3972d50febe] 2025-05-25 03:02:21.057893 | orchestrator | 03:02:21.057 STDOUT terraform: openstack_networking_router_v2.router: Creation complete after 8s [id=7289f183-107a-456b-8c31-8a7e22c4df3a] 2025-05-25 03:02:21.065010 | orchestrator | 03:02:21.064 STDOUT terraform: openstack_networking_secgroup_v2.security_group_management: Creating... 2025-05-25 03:02:21.066098 | orchestrator | 03:02:21.065 STDOUT terraform: openstack_networking_secgroup_v2.security_group_node: Creating... 2025-05-25 03:02:21.066743 | orchestrator | 03:02:21.066 STDOUT terraform: openstack_networking_router_interface_v2.router_interface: Creating... 2025-05-25 03:02:21.267214 | orchestrator | 03:02:21.266 STDOUT terraform: openstack_networking_secgroup_v2.security_group_node: Creation complete after 0s [id=f54416d6-21a1-43a9-abea-80f2fb356eef] 2025-05-25 03:02:21.282506 | orchestrator | 03:02:21.282 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_rule_vrrp: Creating... 2025-05-25 03:02:21.283120 | orchestrator | 03:02:21.282 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_node_rule3: Creating... 2025-05-25 03:02:21.284482 | orchestrator | 03:02:21.284 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_node_rule1: Creating... 2025-05-25 03:02:21.285589 | orchestrator | 03:02:21.285 STDOUT terraform: openstack_networking_port_v2.node_port_management[5]: Creating... 2025-05-25 03:02:21.290427 | orchestrator | 03:02:21.290 STDOUT terraform: openstack_networking_port_v2.node_port_management[2]: Creating... 2025-05-25 03:02:21.291296 | orchestrator | 03:02:21.291 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_node_rule2: Creating... 
2025-05-25 03:02:21.293384 | orchestrator | 03:02:21.293 STDOUT terraform: openstack_networking_port_v2.node_port_management[0]: Creating... 2025-05-25 03:02:21.294404 | orchestrator | 03:02:21.294 STDOUT terraform: openstack_networking_secgroup_v2.security_group_management: Creation complete after 0s [id=b1106d3f-7da8-430e-be1c-2b2d2c77f9da] 2025-05-25 03:02:21.302759 | orchestrator | 03:02:21.302 STDOUT terraform: openstack_networking_port_v2.node_port_management[4]: Creating... 2025-05-25 03:02:21.307401 | orchestrator | 03:02:21.307 STDOUT terraform: openstack_networking_port_v2.node_port_management[3]: Creating... 2025-05-25 03:02:21.434010 | orchestrator | 03:02:21.433 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_node_rule3: Creation complete after 0s [id=04d67f2c-c25d-4324-8814-aa28055b649e] 2025-05-25 03:02:21.445648 | orchestrator | 03:02:21.445 STDOUT terraform: openstack_networking_port_v2.node_port_management[1]: Creating... 2025-05-25 03:02:21.585326 | orchestrator | 03:02:21.584 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_node_rule1: Creation complete after 1s [id=91b8d2ee-6bdf-4ad0-9fb1-1780734cee15] 2025-05-25 03:02:21.593614 | orchestrator | 03:02:21.593 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule5: Creating... 2025-05-25 03:02:21.744289 | orchestrator | 03:02:21.743 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_rule_vrrp: Creation complete after 1s [id=8436b583-fa25-487e-84fb-4754b003a5c2] 2025-05-25 03:02:21.751007 | orchestrator | 03:02:21.750 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule4: Creating... 
2025-05-25 03:02:21.822863 | orchestrator | 03:02:21.822 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule5: Creation complete after 0s [id=f127137e-3aa5-4760-9bbf-e5f07bb7c4ef] 2025-05-25 03:02:21.830536 | orchestrator | 03:02:21.830 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule3: Creating... 2025-05-25 03:02:21.972274 | orchestrator | 03:02:21.971 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule4: Creation complete after 0s [id=47ee6ff7-88fe-4a1e-a601-00e63481b3b7] 2025-05-25 03:02:21.988527 | orchestrator | 03:02:21.988 STDOUT terraform: openstack_networking_port_v2.manager_port_management: Creating... 2025-05-25 03:02:22.299482 | orchestrator | 03:02:22.298 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule3: Creation complete after 0s [id=90a4969b-f87c-4785-bdfe-8aeb1d6f3c26] 2025-05-25 03:02:22.310139 | orchestrator | 03:02:22.309 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule2: Creating... 2025-05-25 03:02:22.482617 | orchestrator | 03:02:22.482 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule2: Creation complete after 0s [id=8c27d404-ecb2-4745-81e4-d49568db34e6] 2025-05-25 03:02:22.490325 | orchestrator | 03:02:22.489 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_node_rule2: Creation complete after 1s [id=4c7a2ec9-395d-425f-9ef8-98393c4c93da] 2025-05-25 03:02:22.490911 | orchestrator | 03:02:22.490 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule1: Creating... 
2025-05-25 03:02:22.649432 | orchestrator | 03:02:22.648 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule1: Creation complete after 1s [id=e7c34d48-6e80-499e-bcc7-8de8170b321b] 2025-05-25 03:02:26.975446 | orchestrator | 03:02:26.975 STDOUT terraform: openstack_networking_port_v2.node_port_management[0]: Creation complete after 6s [id=ff9166fc-e8fe-4f86-8b10-8eda5cb423d2] 2025-05-25 03:02:27.188978 | orchestrator | 03:02:27.188 STDOUT terraform: openstack_networking_port_v2.node_port_management[1]: Creation complete after 6s [id=28667457-5ca7-4e22-839b-f680c968aae9] 2025-05-25 03:02:27.190816 | orchestrator | 03:02:27.190 STDOUT terraform: openstack_networking_port_v2.node_port_management[5]: Creation complete after 6s [id=61b297c4-642c-4505-bdb7-5a431deec364] 2025-05-25 03:02:27.218671 | orchestrator | 03:02:27.218 STDOUT terraform: openstack_networking_port_v2.node_port_management[3]: Creation complete after 6s [id=e53b6fd3-a3e0-4a90-be49-0c9856876627] 2025-05-25 03:02:27.437123 | orchestrator | 03:02:27.436 STDOUT terraform: openstack_networking_port_v2.node_port_management[4]: Creation complete after 6s [id=3abd4f4d-f126-4ff3-8d2b-b15a69c75e14] 2025-05-25 03:02:27.529002 | orchestrator | 03:02:27.528 STDOUT terraform: openstack_networking_port_v2.manager_port_management: Creation complete after 6s [id=ba7f66bc-6830-45a7-893f-b443cf7beb30] 2025-05-25 03:02:27.541187 | orchestrator | 03:02:27.540 STDOUT terraform: openstack_networking_port_v2.node_port_management[2]: Creation complete after 7s [id=803f7b98-fbf4-4206-8ed3-e92ead5af94c] 2025-05-25 03:02:28.265731 | orchestrator | 03:02:28.265 STDOUT terraform: openstack_networking_router_interface_v2.router_interface: Creation complete after 7s [id=a2f38d20-ebb6-4a78-80fc-050712f7d810] 2025-05-25 03:02:28.283873 | orchestrator | 03:02:28.283 STDOUT terraform: openstack_networking_floatingip_v2.manager_floating_ip: Creating... 
2025-05-25 03:02:28.307942 | orchestrator | 03:02:28.304 STDOUT terraform: openstack_compute_instance_v2.node_server[4]: Creating... 2025-05-25 03:02:28.308010 | orchestrator | 03:02:28.304 STDOUT terraform: openstack_compute_instance_v2.node_server[5]: Creating... 2025-05-25 03:02:28.308015 | orchestrator | 03:02:28.306 STDOUT terraform: openstack_compute_instance_v2.node_server[1]: Creating... 2025-05-25 03:02:28.308117 | orchestrator | 03:02:28.308 STDOUT terraform: openstack_compute_instance_v2.node_server[0]: Creating... 2025-05-25 03:02:28.312770 | orchestrator | 03:02:28.312 STDOUT terraform: openstack_compute_instance_v2.node_server[2]: Creating... 2025-05-25 03:02:28.329195 | orchestrator | 03:02:28.329 STDOUT terraform: openstack_compute_instance_v2.node_server[3]: Creating... 2025-05-25 03:02:34.574104 | orchestrator | 03:02:34.571 STDOUT terraform: openstack_networking_floatingip_v2.manager_floating_ip: Creation complete after 7s [id=5fcb452f-d045-4fbd-8312-49c1637e96bb] 2025-05-25 03:02:34.580366 | orchestrator | 03:02:34.580 STDOUT terraform: openstack_networking_floatingip_associate_v2.manager_floating_ip_association: Creating... 2025-05-25 03:02:34.587919 | orchestrator | 03:02:34.587 STDOUT terraform: local_file.MANAGER_ADDRESS: Creating... 2025-05-25 03:02:34.592609 | orchestrator | 03:02:34.592 STDOUT terraform: local_file.MANAGER_ADDRESS: Creation complete after 0s [id=d17dd735a129659a928c9b1f851c665f5b20efba] 2025-05-25 03:02:34.593657 | orchestrator | 03:02:34.593 STDOUT terraform: local_file.inventory: Creating... 
2025-05-25 03:02:34.598045 | orchestrator | 03:02:34.597 STDOUT terraform: local_file.inventory: Creation complete after 0s [id=a71f0547039fdd4da7db1e87200152df2ff737d1] 2025-05-25 03:02:35.383409 | orchestrator | 03:02:35.382 STDOUT terraform: openstack_networking_floatingip_associate_v2.manager_floating_ip_association: Creation complete after 0s [id=5fcb452f-d045-4fbd-8312-49c1637e96bb] 2025-05-25 03:02:38.306425 | orchestrator | 03:02:38.306 STDOUT terraform: openstack_compute_instance_v2.node_server[4]: Still creating... [10s elapsed] 2025-05-25 03:02:38.306544 | orchestrator | 03:02:38.306 STDOUT terraform: openstack_compute_instance_v2.node_server[5]: Still creating... [10s elapsed] 2025-05-25 03:02:38.312462 | orchestrator | 03:02:38.312 STDOUT terraform: openstack_compute_instance_v2.node_server[1]: Still creating... [10s elapsed] 2025-05-25 03:02:38.312612 | orchestrator | 03:02:38.312 STDOUT terraform: openstack_compute_instance_v2.node_server[0]: Still creating... [10s elapsed] 2025-05-25 03:02:38.314642 | orchestrator | 03:02:38.314 STDOUT terraform: openstack_compute_instance_v2.node_server[2]: Still creating... [10s elapsed] 2025-05-25 03:02:38.330216 | orchestrator | 03:02:38.329 STDOUT terraform: openstack_compute_instance_v2.node_server[3]: Still creating... [10s elapsed] 2025-05-25 03:02:48.307098 | orchestrator | 03:02:48.306 STDOUT terraform: openstack_compute_instance_v2.node_server[5]: Still creating... [20s elapsed] 2025-05-25 03:02:48.307220 | orchestrator | 03:02:48.306 STDOUT terraform: openstack_compute_instance_v2.node_server[4]: Still creating... [20s elapsed] 2025-05-25 03:02:48.313254 | orchestrator | 03:02:48.313 STDOUT terraform: openstack_compute_instance_v2.node_server[1]: Still creating... [20s elapsed] 2025-05-25 03:02:48.313352 | orchestrator | 03:02:48.313 STDOUT terraform: openstack_compute_instance_v2.node_server[0]: Still creating... 
[20s elapsed] 2025-05-25 03:02:48.315538 | orchestrator | 03:02:48.315 STDOUT terraform: openstack_compute_instance_v2.node_server[2]: Still creating... [20s elapsed] 2025-05-25 03:02:48.331094 | orchestrator | 03:02:48.330 STDOUT terraform: openstack_compute_instance_v2.node_server[3]: Still creating... [20s elapsed] 2025-05-25 03:02:48.838431 | orchestrator | 03:02:48.838 STDOUT terraform: openstack_compute_instance_v2.node_server[1]: Creation complete after 21s [id=2d9c87e4-41bd-49ec-9649-e7b41a8f8afd] 2025-05-25 03:02:48.917358 | orchestrator | 03:02:48.916 STDOUT terraform: openstack_compute_instance_v2.node_server[5]: Creation complete after 21s [id=44ba5ca8-a5f6-450f-873e-1dd2967a729a] 2025-05-25 03:02:49.031020 | orchestrator | 03:02:49.030 STDOUT terraform: openstack_compute_instance_v2.node_server[2]: Creation complete after 21s [id=a03722cc-3ff6-4fa4-aa12-aa95bbb1458d] 2025-05-25 03:02:58.309331 | orchestrator | 03:02:58.309 STDOUT terraform: openstack_compute_instance_v2.node_server[4]: Still creating... [30s elapsed] 2025-05-25 03:02:58.313491 | orchestrator | 03:02:58.313 STDOUT terraform: openstack_compute_instance_v2.node_server[0]: Still creating... [30s elapsed] 2025-05-25 03:02:58.331972 | orchestrator | 03:02:58.331 STDOUT terraform: openstack_compute_instance_v2.node_server[3]: Still creating... 
[30s elapsed] 2025-05-25 03:02:58.758880 | orchestrator | 03:02:58.758 STDOUT terraform: openstack_compute_instance_v2.node_server[0]: Creation complete after 31s [id=6cbe9be3-78a8-42d9-992c-3b5ca21d34d8] 2025-05-25 03:02:58.962887 | orchestrator | 03:02:58.962 STDOUT terraform: openstack_compute_instance_v2.node_server[3]: Creation complete after 31s [id=2af3813b-d24a-4924-a9b5-0d6fd3d70131] 2025-05-25 03:02:58.982719 | orchestrator | 03:02:58.982 STDOUT terraform: openstack_compute_instance_v2.node_server[4]: Creation complete after 31s [id=71fbc9f5-1f91-493d-9d7e-cad1ae34c712] 2025-05-25 03:02:59.004709 | orchestrator | 03:02:59.004 STDOUT terraform: null_resource.node_semaphore: Creating... 2025-05-25 03:02:59.012190 | orchestrator | 03:02:59.012 STDOUT terraform: null_resource.node_semaphore: Creation complete after 0s [id=8732860865113341494] 2025-05-25 03:02:59.017264 | orchestrator | 03:02:59.017 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[4]: Creating... 2025-05-25 03:02:59.018730 | orchestrator | 03:02:59.018 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[2]: Creating... 2025-05-25 03:02:59.019349 | orchestrator | 03:02:59.019 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[6]: Creating... 2025-05-25 03:02:59.019540 | orchestrator | 03:02:59.019 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[3]: Creating... 2025-05-25 03:02:59.019654 | orchestrator | 03:02:59.019 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[1]: Creating... 2025-05-25 03:02:59.022722 | orchestrator | 03:02:59.022 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[8]: Creating... 2025-05-25 03:02:59.030696 | orchestrator | 03:02:59.030 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[5]: Creating... 
2025-05-25 03:02:59.031430 | orchestrator | 03:02:59.031 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[7]: Creating... 2025-05-25 03:02:59.031455 | orchestrator | 03:02:59.031 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[0]: Creating... 2025-05-25 03:02:59.039993 | orchestrator | 03:02:59.039 STDOUT terraform: openstack_compute_instance_v2.manager_server: Creating... 2025-05-25 03:03:04.333643 | orchestrator | 03:03:04.333 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[4]: Creation complete after 5s [id=71fbc9f5-1f91-493d-9d7e-cad1ae34c712/38e86a76-d592-4447-9c79-2151d2192c3f] 2025-05-25 03:03:04.352923 | orchestrator | 03:03:04.352 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[8]: Creation complete after 5s [id=44ba5ca8-a5f6-450f-873e-1dd2967a729a/603d0154-8a06-450e-a743-756d85b1bc6a] 2025-05-25 03:03:04.379344 | orchestrator | 03:03:04.378 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[0]: Creation complete after 5s [id=2af3813b-d24a-4924-a9b5-0d6fd3d70131/dac67b12-4a3b-49b0-a18f-dd9740769fda] 2025-05-25 03:03:04.382129 | orchestrator | 03:03:04.381 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[7]: Creation complete after 5s [id=71fbc9f5-1f91-493d-9d7e-cad1ae34c712/b0e50223-c4d0-48f7-a5f8-d1963b067c82] 2025-05-25 03:03:04.404730 | orchestrator | 03:03:04.404 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[5]: Creation complete after 5s [id=44ba5ca8-a5f6-450f-873e-1dd2967a729a/8968a7f7-851b-405b-80f4-de48ab1dffee] 2025-05-25 03:03:04.415448 | orchestrator | 03:03:04.414 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[3]: Creation complete after 5s [id=2af3813b-d24a-4924-a9b5-0d6fd3d70131/4276f8fa-1a41-4d3c-8190-a1d2d3b80049] 2025-05-25 03:03:04.437185 | orchestrator | 03:03:04.436 STDOUT terraform: 
openstack_compute_volume_attach_v2.node_volume_attachment[1]: Creation complete after 5s [id=71fbc9f5-1f91-493d-9d7e-cad1ae34c712/17d1c6f1-1305-4025-b6c8-ee1be555c001] 2025-05-25 03:03:04.446527 | orchestrator | 03:03:04.446 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[2]: Creation complete after 5s [id=44ba5ca8-a5f6-450f-873e-1dd2967a729a/201f277c-fdb2-416e-b305-0d8ba90b32cd] 2025-05-25 03:03:04.464764 | orchestrator | 03:03:04.464 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[6]: Creation complete after 5s [id=2af3813b-d24a-4924-a9b5-0d6fd3d70131/cdfa8505-de86-48ff-8ed6-b6e1381a94b2] 2025-05-25 03:03:09.044712 | orchestrator | 03:03:09.044 STDOUT terraform: openstack_compute_instance_v2.manager_server: Still creating... [10s elapsed] 2025-05-25 03:03:19.045630 | orchestrator | 03:03:19.045 STDOUT terraform: openstack_compute_instance_v2.manager_server: Still creating... [20s elapsed] 2025-05-25 03:03:19.426959 | orchestrator | 03:03:19.426 STDOUT terraform: openstack_compute_instance_v2.manager_server: Creation complete after 20s [id=f3bf55a8-d63e-4604-a933-7220ef0b3d16] 2025-05-25 03:03:19.451062 | orchestrator | 03:03:19.450 STDOUT terraform: Apply complete! Resources: 64 added, 0 changed, 0 destroyed. 
2025-05-25 03:03:19.451262 | orchestrator | 03:03:19.450 STDOUT terraform: Outputs: 2025-05-25 03:03:19.451283 | orchestrator | 03:03:19.451 STDOUT terraform: manager_address = 2025-05-25 03:03:19.451296 | orchestrator | 03:03:19.451 STDOUT terraform: private_key = 2025-05-25 03:03:19.700607 | orchestrator | ok: Runtime: 0:01:34.766577 2025-05-25 03:03:19.824774 | 2025-05-25 03:03:19.825141 | TASK [Create infrastructure (stable)] 2025-05-25 03:03:20.394670 | orchestrator | skipping: Conditional result was False 2025-05-25 03:03:20.403129 | 2025-05-25 03:03:20.403236 | TASK [Fetch manager address] 2025-05-25 03:03:21.060676 | orchestrator | ok 2025-05-25 03:03:21.072410 | 2025-05-25 03:03:21.072549 | TASK [Set manager_host address] 2025-05-25 03:03:21.200199 | orchestrator | ok 2025-05-25 03:03:21.239675 | 2025-05-25 03:03:21.239798 | LOOP [Update ansible collections] 2025-05-25 03:03:22.453808 | orchestrator | [WARNING]: Collection osism.services does not support Ansible version 2.15.2 2025-05-25 03:03:22.454037 | orchestrator | [WARNING]: Collection osism.commons does not support Ansible version 2.15.2 2025-05-25 03:03:22.454074 | orchestrator | Starting galaxy collection install process 2025-05-25 03:03:22.454099 | orchestrator | Process install dependency map 2025-05-25 03:03:22.454122 | orchestrator | Starting collection install process 2025-05-25 03:03:22.454143 | orchestrator | Installing 'osism.commons:999.0.0' to '/home/zuul-testbed02/.ansible/collections/ansible_collections/osism/commons' 2025-05-25 03:03:22.454175 | orchestrator | Created collection for osism.commons:999.0.0 at /home/zuul-testbed02/.ansible/collections/ansible_collections/osism/commons 2025-05-25 03:03:22.454200 | orchestrator | osism.commons:999.0.0 was installed successfully 2025-05-25 03:03:22.454246 | orchestrator | ok: Item: commons Runtime: 0:00:00.581169 2025-05-25 03:03:23.287969 | orchestrator | [WARNING]: Collection osism.commons does not support Ansible version 2.15.2 
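The "Fetch manager address" task above consumes the Terraform outputs printed just before it (`manager_address`, `private_key`; their values are suppressed in this log). As a minimal sketch of how such outputs can be read programmatically, the following parses the JSON that `terraform output -json` emits — the payload below is illustrative, not the job's real data:

```python
import json

def parse_tf_outputs(json_text: str) -> dict:
    """Flatten the {'name': {'value': ...}} structure emitted by
    `terraform output -json` into a plain name -> value dict."""
    raw = json.loads(json_text)
    return {name: entry["value"] for name, entry in raw.items()}

# Illustrative payload only; the real values are suppressed in the job log.
sample = '{"manager_address": {"value": "192.0.2.10", "sensitive": false}}'
outputs = parse_tf_outputs(sample)
print(outputs["manager_address"])  # → 192.0.2.10
```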
2025-05-25 03:03:23.288118 | orchestrator | [WARNING]: Collection osism.services does not support Ansible version 2.15.2 2025-05-25 03:03:23.288151 | orchestrator | Starting galaxy collection install process 2025-05-25 03:03:23.288175 | orchestrator | Process install dependency map 2025-05-25 03:03:23.288197 | orchestrator | Starting collection install process 2025-05-25 03:03:23.288217 | orchestrator | Installing 'osism.services:999.0.0' to '/home/zuul-testbed02/.ansible/collections/ansible_collections/osism/services' 2025-05-25 03:03:23.288238 | orchestrator | Created collection for osism.services:999.0.0 at /home/zuul-testbed02/.ansible/collections/ansible_collections/osism/services 2025-05-25 03:03:23.288257 | orchestrator | osism.services:999.0.0 was installed successfully 2025-05-25 03:03:23.288286 | orchestrator | ok: Item: services Runtime: 0:00:00.581359 2025-05-25 03:03:23.305646 | 2025-05-25 03:03:23.305758 | TASK [Wait up to 300 seconds for port 22 to become open and contain "OpenSSH"] 2025-05-25 03:03:33.940756 | orchestrator | ok 2025-05-25 03:03:33.968615 | 2025-05-25 03:03:33.968773 | TASK [Wait a little longer for the manager so that everything is ready] 2025-05-25 03:04:34.020912 | orchestrator | ok 2025-05-25 03:04:34.031688 | 2025-05-25 03:04:34.031830 | TASK [Fetch manager ssh hostkey] 2025-05-25 03:04:35.645813 | orchestrator | Output suppressed because no_log was given 2025-05-25 03:04:35.653745 | 2025-05-25 03:04:35.653894 | TASK [Get ssh keypair from terraform environment] 2025-05-25 03:04:36.230649 | orchestrator | ok: Runtime: 0:00:00.009515 2025-05-25 03:04:36.238631 | 2025-05-25 03:04:36.238762 | TASK [Point out that the following task takes some time and does not give any output] 2025-05-25 03:04:36.292712 | orchestrator | ok: The task 'Run manager part 0' runs an Ansible playbook on the manager. There is no further output of this here. It takes a few minutes for this task to complete. 
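The wait task above polls TCP port 22 until it is open and the SSH banner contains "OpenSSH" (Ansible's `wait_for` with `search_regex` behaves this way). A standalone sketch of the same idea — host, timeout, and polling interval are illustrative values, not taken from the job:

```python
import socket
import time

def wait_for_banner(host: str, port: int = 22, timeout: float = 300.0,
                    token: bytes = b"OpenSSH") -> bool:
    """Poll until `port` accepts a connection and the first bytes it
    sends contain `token`; give up after `timeout` seconds."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        try:
            with socket.create_connection((host, port), timeout=5) as sock:
                if token in sock.recv(256):
                    return True
        except OSError:
            pass  # port not open yet (or connection reset); retry
        time.sleep(2)
    return False
```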
2025-05-25 03:04:36.301156 | 2025-05-25 03:04:36.301312 | TASK [Run manager part 0] 2025-05-25 03:04:37.174360 | orchestrator | [WARNING]: Collection osism.commons does not support Ansible version 2.15.2 2025-05-25 03:04:37.218552 | orchestrator | 2025-05-25 03:04:37.218599 | orchestrator | PLAY [Wait for cloud-init to finish] ******************************************* 2025-05-25 03:04:37.218607 | orchestrator | 2025-05-25 03:04:37.218620 | orchestrator | TASK [Check /var/lib/cloud/instance/boot-finished] ***************************** 2025-05-25 03:04:39.118611 | orchestrator | ok: [testbed-manager] 2025-05-25 03:04:39.118673 | orchestrator | 2025-05-25 03:04:39.118700 | orchestrator | PLAY [Run manager part 0] ****************************************************** 2025-05-25 03:04:39.118713 | orchestrator | 2025-05-25 03:04:39.118726 | orchestrator | TASK [Gathering Facts] ********************************************************* 2025-05-25 03:04:41.046289 | orchestrator | ok: [testbed-manager] 2025-05-25 03:04:41.046336 | orchestrator | 2025-05-25 03:04:41.046342 | orchestrator | TASK [Get home directory of ansible user] ************************************** 2025-05-25 03:04:41.755875 | orchestrator | ok: [testbed-manager] 2025-05-25 03:04:41.756021 | orchestrator | 2025-05-25 03:04:41.756047 | orchestrator | TASK [Set repo_path fact] ****************************************************** 2025-05-25 03:04:41.818317 | orchestrator | skipping: [testbed-manager] 2025-05-25 03:04:41.818387 | orchestrator | 2025-05-25 03:04:41.818403 | orchestrator | TASK [Update package cache] **************************************************** 2025-05-25 03:04:41.859099 | orchestrator | skipping: [testbed-manager] 2025-05-25 03:04:41.859161 | orchestrator | 2025-05-25 03:04:41.859174 | orchestrator | TASK [Install required packages] *********************************************** 2025-05-25 03:04:41.892770 | orchestrator | skipping: [testbed-manager] 2025-05-25 03:04:41.892831 | 
orchestrator | 2025-05-25 03:04:41.892838 | orchestrator | TASK [Remove some python packages] ********************************************* 2025-05-25 03:04:41.923585 | orchestrator | skipping: [testbed-manager] 2025-05-25 03:04:41.923658 | orchestrator | 2025-05-25 03:04:41.923668 | orchestrator | TASK [Set venv_command fact (RedHat)] ****************************************** 2025-05-25 03:04:41.954850 | orchestrator | skipping: [testbed-manager] 2025-05-25 03:04:41.954911 | orchestrator | 2025-05-25 03:04:41.954922 | orchestrator | TASK [Fail if Ubuntu version is lower than 22.04] ****************************** 2025-05-25 03:04:41.997985 | orchestrator | skipping: [testbed-manager] 2025-05-25 03:04:41.998063 | orchestrator | 2025-05-25 03:04:41.998073 | orchestrator | TASK [Fail if Debian version is lower than 12] ********************************* 2025-05-25 03:04:42.032416 | orchestrator | skipping: [testbed-manager] 2025-05-25 03:04:42.032474 | orchestrator | 2025-05-25 03:04:42.032483 | orchestrator | TASK [Set APT options on manager] ********************************************** 2025-05-25 03:04:42.897864 | orchestrator | changed: [testbed-manager] 2025-05-25 03:04:42.897921 | orchestrator | 2025-05-25 03:04:42.897930 | orchestrator | TASK [Update APT cache and run dist-upgrade] *********************************** 2025-05-25 03:07:45.205730 | orchestrator | changed: [testbed-manager] 2025-05-25 03:07:45.205803 | orchestrator | 2025-05-25 03:07:45.205821 | orchestrator | TASK [Install HWE kernel package on Ubuntu] ************************************ 2025-05-25 03:09:18.685131 | orchestrator | changed: [testbed-manager] 2025-05-25 03:09:18.685228 | orchestrator | 2025-05-25 03:09:18.685245 | orchestrator | TASK [Install required packages] *********************************************** 2025-05-25 03:09:38.800294 | orchestrator | changed: [testbed-manager] 2025-05-25 03:09:38.800341 | orchestrator | 2025-05-25 03:09:38.800351 | orchestrator | TASK [Remove 
some python packages] ********************************************* 2025-05-25 03:09:47.536957 | orchestrator | changed: [testbed-manager] 2025-05-25 03:09:47.537049 | orchestrator | 2025-05-25 03:09:47.537065 | orchestrator | TASK [Set venv_command fact (Debian)] ****************************************** 2025-05-25 03:09:47.587402 | orchestrator | ok: [testbed-manager] 2025-05-25 03:09:47.587476 | orchestrator | 2025-05-25 03:09:47.587490 | orchestrator | TASK [Get current user] ******************************************************** 2025-05-25 03:09:48.371048 | orchestrator | ok: [testbed-manager] 2025-05-25 03:09:48.371143 | orchestrator | 2025-05-25 03:09:48.371162 | orchestrator | TASK [Create venv directory] *************************************************** 2025-05-25 03:09:49.112887 | orchestrator | changed: [testbed-manager] 2025-05-25 03:09:49.112965 | orchestrator | 2025-05-25 03:09:49.112980 | orchestrator | TASK [Install netaddr in venv] ************************************************* 2025-05-25 03:09:55.474700 | orchestrator | changed: [testbed-manager] 2025-05-25 03:09:55.474791 | orchestrator | 2025-05-25 03:09:55.474830 | orchestrator | TASK [Install ansible-core in venv] ******************************************** 2025-05-25 03:10:01.467721 | orchestrator | changed: [testbed-manager] 2025-05-25 03:10:01.467818 | orchestrator | 2025-05-25 03:10:01.467836 | orchestrator | TASK [Install requests >= 2.32.2] ********************************************** 2025-05-25 03:10:04.129655 | orchestrator | changed: [testbed-manager] 2025-05-25 03:10:04.129698 | orchestrator | 2025-05-25 03:10:04.129707 | orchestrator | TASK [Install docker >= 7.1.0] ************************************************* 2025-05-25 03:10:05.902827 | orchestrator | changed: [testbed-manager] 2025-05-25 03:10:05.902897 | orchestrator | 2025-05-25 03:10:05.902908 | orchestrator | TASK [Create directories in /opt/src] ****************************************** 2025-05-25 
03:10:06.996914 | orchestrator | changed: [testbed-manager] => (item=osism/ansible-collection-commons) 2025-05-25 03:10:06.996959 | orchestrator | changed: [testbed-manager] => (item=osism/ansible-collection-services) 2025-05-25 03:10:06.996966 | orchestrator | 2025-05-25 03:10:06.996974 | orchestrator | TASK [Sync sources in /opt/src] ************************************************ 2025-05-25 03:10:07.042056 | orchestrator | [DEPRECATION WARNING]: The connection's stdin object is deprecated. Call 2025-05-25 03:10:07.042142 | orchestrator | display.prompt_until(msg) instead. This feature will be removed in version 2025-05-25 03:10:07.042159 | orchestrator | 2.19. Deprecation warnings can be disabled by setting 2025-05-25 03:10:07.042173 | orchestrator | deprecation_warnings=False in ansible.cfg. 2025-05-25 03:10:10.150014 | orchestrator | changed: [testbed-manager] => (item=osism/ansible-collection-commons) 2025-05-25 03:10:10.150081 | orchestrator | changed: [testbed-manager] => (item=osism/ansible-collection-services) 2025-05-25 03:10:10.150087 | orchestrator | 2025-05-25 03:10:10.150094 | orchestrator | TASK [Create /usr/share/ansible directory] ************************************* 2025-05-25 03:10:10.717027 | orchestrator | changed: [testbed-manager] 2025-05-25 03:10:10.717066 | orchestrator | 2025-05-25 03:10:10.717073 | orchestrator | TASK [Install collections from Ansible galaxy] ********************************* 2025-05-25 03:13:31.494189 | orchestrator | changed: [testbed-manager] => (item=ansible.netcommon) 2025-05-25 03:13:31.494303 | orchestrator | changed: [testbed-manager] => (item=ansible.posix) 2025-05-25 03:13:31.494323 | orchestrator | changed: [testbed-manager] => (item=community.docker>=3.10.2) 2025-05-25 03:13:31.494336 | orchestrator | 2025-05-25 03:13:31.494349 | orchestrator | TASK [Install local collections] *********************************************** 2025-05-25 03:13:33.804417 | orchestrator | changed: [testbed-manager] => 
(item=ansible-collection-commons) 2025-05-25 03:13:33.804547 | orchestrator | changed: [testbed-manager] => (item=ansible-collection-services) 2025-05-25 03:13:33.804566 | orchestrator | 2025-05-25 03:13:33.804580 | orchestrator | PLAY [Create operator user] **************************************************** 2025-05-25 03:13:33.804592 | orchestrator | 2025-05-25 03:13:33.804604 | orchestrator | TASK [Gathering Facts] ********************************************************* 2025-05-25 03:13:35.226672 | orchestrator | ok: [testbed-manager] 2025-05-25 03:13:35.226721 | orchestrator | 2025-05-25 03:13:35.226731 | orchestrator | TASK [osism.commons.operator : Gather variables for each operating system] ***** 2025-05-25 03:13:35.258287 | orchestrator | ok: [testbed-manager] 2025-05-25 03:13:35.258331 | orchestrator | 2025-05-25 03:13:35.258339 | orchestrator | TASK [osism.commons.operator : Set operator_groups variable to default value] *** 2025-05-25 03:13:35.320853 | orchestrator | ok: [testbed-manager] 2025-05-25 03:13:35.320909 | orchestrator | 2025-05-25 03:13:35.320916 | orchestrator | TASK [osism.commons.operator : Create operator group] ************************** 2025-05-25 03:13:36.105945 | orchestrator | changed: [testbed-manager] 2025-05-25 03:13:36.106085 | orchestrator | 2025-05-25 03:13:36.106109 | orchestrator | TASK [osism.commons.operator : Create user] ************************************ 2025-05-25 03:13:36.827176 | orchestrator | changed: [testbed-manager] 2025-05-25 03:13:36.827269 | orchestrator | 2025-05-25 03:13:36.827285 | orchestrator | TASK [osism.commons.operator : Add user to additional groups] ****************** 2025-05-25 03:13:38.207137 | orchestrator | changed: [testbed-manager] => (item=adm) 2025-05-25 03:13:38.207230 | orchestrator | changed: [testbed-manager] => (item=sudo) 2025-05-25 03:13:38.207247 | orchestrator | 2025-05-25 03:13:38.207281 | orchestrator | TASK [osism.commons.operator : Copy user sudoers file] 
************************* 2025-05-25 03:13:39.545907 | orchestrator | changed: [testbed-manager] 2025-05-25 03:13:39.546043 | orchestrator | 2025-05-25 03:13:39.546065 | orchestrator | TASK [osism.commons.operator : Set language variables in .bashrc configuration file] *** 2025-05-25 03:13:41.286771 | orchestrator | changed: [testbed-manager] => (item=export LANGUAGE=C.UTF-8) 2025-05-25 03:13:41.286874 | orchestrator | changed: [testbed-manager] => (item=export LANG=C.UTF-8) 2025-05-25 03:13:41.286889 | orchestrator | changed: [testbed-manager] => (item=export LC_ALL=C.UTF-8) 2025-05-25 03:13:41.286901 | orchestrator | 2025-05-25 03:13:41.286913 | orchestrator | TASK [osism.commons.operator : Create .ssh directory] ************************** 2025-05-25 03:13:41.860670 | orchestrator | changed: [testbed-manager] 2025-05-25 03:13:41.860715 | orchestrator | 2025-05-25 03:13:41.860724 | orchestrator | TASK [osism.commons.operator : Check number of SSH authorized keys] ************ 2025-05-25 03:13:41.931415 | orchestrator | skipping: [testbed-manager] 2025-05-25 03:13:41.931456 | orchestrator | 2025-05-25 03:13:41.931465 | orchestrator | TASK [osism.commons.operator : Set ssh authorized keys] ************************ 2025-05-25 03:13:42.771899 | orchestrator | changed: [testbed-manager] => (item=None) 2025-05-25 03:13:42.771989 | orchestrator | changed: [testbed-manager] 2025-05-25 03:13:42.772006 | orchestrator | 2025-05-25 03:13:42.772019 | orchestrator | TASK [osism.commons.operator : Delete ssh authorized keys] ********************* 2025-05-25 03:13:42.810353 | orchestrator | skipping: [testbed-manager] 2025-05-25 03:13:42.810452 | orchestrator | 2025-05-25 03:13:42.810469 | orchestrator | TASK [osism.commons.operator : Set authorized GitHub accounts] ***************** 2025-05-25 03:13:42.849195 | orchestrator | skipping: [testbed-manager] 2025-05-25 03:13:42.849293 | orchestrator | 2025-05-25 03:13:42.849318 | orchestrator | TASK [osism.commons.operator : Delete 
authorized GitHub accounts] ************** 2025-05-25 03:13:42.885719 | orchestrator | skipping: [testbed-manager] 2025-05-25 03:13:42.885794 | orchestrator | 2025-05-25 03:13:42.885808 | orchestrator | TASK [osism.commons.operator : Set password] *********************************** 2025-05-25 03:13:42.937042 | orchestrator | skipping: [testbed-manager] 2025-05-25 03:13:42.937125 | orchestrator | 2025-05-25 03:13:42.937140 | orchestrator | TASK [osism.commons.operator : Unset & lock password] ************************** 2025-05-25 03:13:43.628067 | orchestrator | ok: [testbed-manager] 2025-05-25 03:13:43.628113 | orchestrator | 2025-05-25 03:13:43.628121 | orchestrator | PLAY [Run manager part 0] ****************************************************** 2025-05-25 03:13:43.628129 | orchestrator | 2025-05-25 03:13:43.628138 | orchestrator | TASK [Gathering Facts] ********************************************************* 2025-05-25 03:13:45.019961 | orchestrator | ok: [testbed-manager] 2025-05-25 03:13:45.019995 | orchestrator | 2025-05-25 03:13:45.020000 | orchestrator | TASK [Recursively change ownership of /opt/venv] ******************************* 2025-05-25 03:13:45.977779 | orchestrator | changed: [testbed-manager] 2025-05-25 03:13:45.977814 | orchestrator | 2025-05-25 03:13:45.977820 | orchestrator | PLAY RECAP ********************************************************************* 2025-05-25 03:13:45.977826 | orchestrator | testbed-manager : ok=33 changed=23 unreachable=0 failed=0 skipped=12 rescued=0 ignored=0 2025-05-25 03:13:45.977830 | orchestrator | 2025-05-25 03:13:46.208681 | orchestrator | ok: Runtime: 0:09:09.445815 2025-05-25 03:13:46.217847 | 2025-05-25 03:13:46.217972 | TASK [Point out that the log in on the manager is now possible] 2025-05-25 03:13:46.249270 | orchestrator | ok: It is now already possible to log in to the manager with 'make login'. 
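The PLAY RECAP line above summarizes the run per host (ok=33 changed=23 unreachable=0 failed=0 …). When grepping long job logs like this one, it can help to turn such recap lines into counters; the small parser below is my own illustration, not part of the job:

```python
import re

def parse_recap(line: str) -> dict:
    """Extract the key=value counters from an Ansible PLAY RECAP host line."""
    return {key: int(val) for key, val in re.findall(r"(\w+)=(\d+)", line)}

recap = ("testbed-manager : ok=33 changed=23 unreachable=0 "
         "failed=0 skipped=12 rescued=0 ignored=0")
counters = parse_recap(recap)
print(counters["failed"])  # → 0
```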
2025-05-25 03:13:46.256216 | 2025-05-25 03:13:46.256326 | TASK [Point out that the following task takes some time and does not give any output] 2025-05-25 03:13:46.286293 | orchestrator | ok: The task 'Run manager part 1 + 2' runs an Ansible playbook on the manager. There is no further output of this here. It takes a few minutes for this task to complete. 2025-05-25 03:13:46.294890 | 2025-05-25 03:13:46.295021 | TASK [Run manager part 1 + 2] 2025-05-25 03:13:47.120064 | orchestrator | [WARNING]: Collection osism.commons does not support Ansible version 2.15.2 2025-05-25 03:13:47.177105 | orchestrator | 2025-05-25 03:13:47.177163 | orchestrator | PLAY [Run manager part 1] ****************************************************** 2025-05-25 03:13:47.177170 | orchestrator | 2025-05-25 03:13:47.177185 | orchestrator | TASK [Gathering Facts] ********************************************************* 2025-05-25 03:13:50.068955 | orchestrator | ok: [testbed-manager] 2025-05-25 03:13:50.069006 | orchestrator | 2025-05-25 03:13:50.069029 | orchestrator | TASK [Set venv_command fact (RedHat)] ****************************************** 2025-05-25 03:13:50.103236 | orchestrator | skipping: [testbed-manager] 2025-05-25 03:13:50.103286 | orchestrator | 2025-05-25 03:13:50.103296 | orchestrator | TASK [Set venv_command fact (Debian)] ****************************************** 2025-05-25 03:13:50.149608 | orchestrator | ok: [testbed-manager] 2025-05-25 03:13:50.149663 | orchestrator | 2025-05-25 03:13:50.149674 | orchestrator | TASK [osism.commons.repository : Gather variables for each operating system] *** 2025-05-25 03:13:50.190213 | orchestrator | ok: [testbed-manager] 2025-05-25 03:13:50.190266 | orchestrator | 2025-05-25 03:13:50.190277 | orchestrator | TASK [osism.commons.repository : Set repository_default fact to default value] *** 2025-05-25 03:13:50.268620 | orchestrator | ok: [testbed-manager] 2025-05-25 03:13:50.268674 | orchestrator | 2025-05-25 03:13:50.268686 | 
orchestrator | TASK [osism.commons.repository : Set repositories to default] ****************** 2025-05-25 03:13:50.333458 | orchestrator | ok: [testbed-manager] 2025-05-25 03:13:50.333514 | orchestrator | 2025-05-25 03:13:50.333524 | orchestrator | TASK [osism.commons.repository : Include distribution specific repository tasks] *** 2025-05-25 03:13:50.381670 | orchestrator | included: /home/zuul-testbed02/.ansible/collections/ansible_collections/osism/commons/roles/repository/tasks/Ubuntu.yml for testbed-manager 2025-05-25 03:13:50.381717 | orchestrator | 2025-05-25 03:13:50.381723 | orchestrator | TASK [osism.commons.repository : Create /etc/apt/sources.list.d directory] ***** 2025-05-25 03:13:51.064380 | orchestrator | ok: [testbed-manager] 2025-05-25 03:13:51.064432 | orchestrator | 2025-05-25 03:13:51.064441 | orchestrator | TASK [osism.commons.repository : Include tasks for Ubuntu < 24.04] ************* 2025-05-25 03:13:51.112234 | orchestrator | skipping: [testbed-manager] 2025-05-25 03:13:51.112289 | orchestrator | 2025-05-25 03:13:51.112297 | orchestrator | TASK [osism.commons.repository : Copy 99osism apt configuration] *************** 2025-05-25 03:13:52.476495 | orchestrator | changed: [testbed-manager] 2025-05-25 03:13:52.476583 | orchestrator | 2025-05-25 03:13:52.476603 | orchestrator | TASK [osism.commons.repository : Remove sources.list file] ********************* 2025-05-25 03:13:53.049474 | orchestrator | ok: [testbed-manager] 2025-05-25 03:13:53.049534 | orchestrator | 2025-05-25 03:13:53.049543 | orchestrator | TASK [osism.commons.repository : Copy ubuntu.sources file] ********************* 2025-05-25 03:13:54.181906 | orchestrator | changed: [testbed-manager] 2025-05-25 03:13:54.182000 | orchestrator | 2025-05-25 03:13:54.182054 | orchestrator | TASK [osism.commons.repository : Update package cache] ************************* 2025-05-25 03:14:07.748864 | orchestrator | changed: [testbed-manager] 2025-05-25 03:14:07.748917 | orchestrator | 
2025-05-25 03:14:07.748924 | orchestrator | TASK [Get home directory of ansible user] ************************************** 2025-05-25 03:14:08.416578 | orchestrator | ok: [testbed-manager] 2025-05-25 03:14:08.416672 | orchestrator | 2025-05-25 03:14:08.416684 | orchestrator | TASK [Set repo_path fact] ****************************************************** 2025-05-25 03:14:08.472402 | orchestrator | skipping: [testbed-manager] 2025-05-25 03:14:08.472447 | orchestrator | 2025-05-25 03:14:08.472456 | orchestrator | TASK [Copy SSH public key] ***************************************************** 2025-05-25 03:14:09.392546 | orchestrator | changed: [testbed-manager] 2025-05-25 03:14:09.392682 | orchestrator | 2025-05-25 03:14:09.392699 | orchestrator | TASK [Copy SSH private key] **************************************************** 2025-05-25 03:14:10.333318 | orchestrator | changed: [testbed-manager] 2025-05-25 03:14:10.333358 | orchestrator | 2025-05-25 03:14:10.333366 | orchestrator | TASK [Create configuration directory] ****************************************** 2025-05-25 03:14:10.902564 | orchestrator | changed: [testbed-manager] 2025-05-25 03:14:10.902760 | orchestrator | 2025-05-25 03:14:10.902782 | orchestrator | TASK [Copy testbed repo] ******************************************************* 2025-05-25 03:14:10.942324 | orchestrator | [DEPRECATION WARNING]: The connection's stdin object is deprecated. Call 2025-05-25 03:14:10.942456 | orchestrator | display.prompt_until(msg) instead. This feature will be removed in version 2025-05-25 03:14:10.942481 | orchestrator | 2.19. Deprecation warnings can be disabled by setting 2025-05-25 03:14:10.942502 | orchestrator | deprecation_warnings=False in ansible.cfg. 
2025-05-25 03:14:12.926222 | orchestrator | changed: [testbed-manager] 2025-05-25 03:14:12.926271 | orchestrator | 2025-05-25 03:14:12.926280 | orchestrator | TASK [Install python requirements in venv] ************************************* 2025-05-25 03:14:21.687876 | orchestrator | ok: [testbed-manager] => (item=Jinja2) 2025-05-25 03:14:21.687965 | orchestrator | ok: [testbed-manager] => (item=PyYAML) 2025-05-25 03:14:21.687985 | orchestrator | ok: [testbed-manager] => (item=packaging) 2025-05-25 03:14:21.687998 | orchestrator | changed: [testbed-manager] => (item=python-gilt==1.2.3) 2025-05-25 03:14:21.688018 | orchestrator | ok: [testbed-manager] => (item=requests>=2.32.2) 2025-05-25 03:14:21.688029 | orchestrator | ok: [testbed-manager] => (item=docker>=7.1.0) 2025-05-25 03:14:21.688040 | orchestrator | 2025-05-25 03:14:21.688052 | orchestrator | TASK [Copy testbed custom CA certificate on Debian/Ubuntu] ********************* 2025-05-25 03:14:22.694212 | orchestrator | changed: [testbed-manager] 2025-05-25 03:14:22.694337 | orchestrator | 2025-05-25 03:14:22.694355 | orchestrator | TASK [Copy testbed custom CA certificate on CentOS] **************************** 2025-05-25 03:14:22.737881 | orchestrator | skipping: [testbed-manager] 2025-05-25 03:14:22.737929 | orchestrator | 2025-05-25 03:14:22.737941 | orchestrator | TASK [Run update-ca-certificates on Debian/Ubuntu] ***************************** 2025-05-25 03:14:25.864373 | orchestrator | changed: [testbed-manager] 2025-05-25 03:14:25.864468 | orchestrator | 2025-05-25 03:14:25.864486 | orchestrator | TASK [Run update-ca-trust on RedHat] ******************************************* 2025-05-25 03:14:25.906658 | orchestrator | skipping: [testbed-manager] 2025-05-25 03:14:25.906731 | orchestrator | 2025-05-25 03:14:25.906745 | orchestrator | TASK [Run manager part 2] ****************************************************** 2025-05-25 03:15:59.311065 | orchestrator | changed: [testbed-manager] 2025-05-25 
03:15:59.311203 | orchestrator | 2025-05-25 03:15:59.311224 | orchestrator | RUNNING HANDLER [osism.commons.repository : Force update of package cache] ***** 2025-05-25 03:16:00.427014 | orchestrator | ok: [testbed-manager] 2025-05-25 03:16:00.427103 | orchestrator | 2025-05-25 03:16:00.427120 | orchestrator | PLAY RECAP ********************************************************************* 2025-05-25 03:16:00.427134 | orchestrator | testbed-manager : ok=21 changed=11 unreachable=0 failed=0 skipped=5 rescued=0 ignored=0 2025-05-25 03:16:00.427146 | orchestrator | 2025-05-25 03:16:00.920345 | orchestrator | ok: Runtime: 0:02:13.936590 2025-05-25 03:16:00.928623 | 2025-05-25 03:16:00.928710 | TASK [Reboot manager] 2025-05-25 03:16:02.461296 | orchestrator | ok: Runtime: 0:00:01.061290 2025-05-25 03:16:02.478459 | 2025-05-25 03:16:02.478601 | TASK [Wait up to 300 seconds for port 22 to become open and contain "OpenSSH"] 2025-05-25 03:16:16.264403 | orchestrator | ok 2025-05-25 03:16:16.276471 | 2025-05-25 03:16:16.276601 | TASK [Wait a little longer for the manager so that everything is ready] 2025-05-25 03:17:16.312980 | orchestrator | ok 2025-05-25 03:17:16.320457 | 2025-05-25 03:17:16.320570 | TASK [Deploy manager + bootstrap nodes] 2025-05-25 03:17:18.756024 | orchestrator | 2025-05-25 03:17:18.756254 | orchestrator | # DEPLOY MANAGER 2025-05-25 03:17:18.756281 | orchestrator | 2025-05-25 03:17:18.756297 | orchestrator | + set -e 2025-05-25 03:17:18.756326 | orchestrator | + echo 2025-05-25 03:17:18.756350 | orchestrator | + echo '# DEPLOY MANAGER' 2025-05-25 03:17:18.756368 | orchestrator | + echo 2025-05-25 03:17:18.756419 | orchestrator | + cat /opt/manager-vars.sh 2025-05-25 03:17:18.759788 | orchestrator | export NUMBER_OF_NODES=6 2025-05-25 03:17:18.759813 | orchestrator | 2025-05-25 03:17:18.759825 | orchestrator | export CEPH_VERSION=reef 2025-05-25 03:17:18.759838 | orchestrator | export CONFIGURATION_VERSION=main 2025-05-25 03:17:18.759850 | orchestrator 
| export MANAGER_VERSION=latest
2025-05-25 03:17:18.759872 | orchestrator | export OPENSTACK_VERSION=2024.2
2025-05-25 03:17:18.759883 | orchestrator |
2025-05-25 03:17:18.759902 | orchestrator | export ARA=false
2025-05-25 03:17:18.759913 | orchestrator | export TEMPEST=true
2025-05-25 03:17:18.759930 | orchestrator | export IS_ZUUL=true
2025-05-25 03:17:18.759942 | orchestrator |
2025-05-25 03:17:18.759960 | orchestrator | export MANAGER_PUBLIC_IP_ADDRESS=81.163.192.153
2025-05-25 03:17:18.759971 | orchestrator | export EXTERNAL_API=false
2025-05-25 03:17:18.759998 | orchestrator |
2025-05-25 03:17:18.760021 | orchestrator | export IMAGE_USER=ubuntu
2025-05-25 03:17:18.760032 | orchestrator | export IMAGE_NODE_USER=ubuntu
2025-05-25 03:17:18.760042 | orchestrator |
2025-05-25 03:17:18.760055 | orchestrator | export CEPH_STACK=ceph-ansible
2025-05-25 03:17:18.760103 | orchestrator |
2025-05-25 03:17:18.760115 | orchestrator | + echo
2025-05-25 03:17:18.760127 | orchestrator | + source /opt/configuration/scripts/include.sh
2025-05-25 03:17:18.761045 | orchestrator | ++ export INTERACTIVE=false
2025-05-25 03:17:18.761065 | orchestrator | ++ INTERACTIVE=false
2025-05-25 03:17:18.761077 | orchestrator | ++ export OSISM_APPLY_RETRY=1
2025-05-25 03:17:18.761089 | orchestrator | ++ OSISM_APPLY_RETRY=1
2025-05-25 03:17:18.761287 | orchestrator | + source /opt/manager-vars.sh
2025-05-25 03:17:18.761303 | orchestrator | ++ export NUMBER_OF_NODES=6
2025-05-25 03:17:18.761314 | orchestrator | ++ NUMBER_OF_NODES=6
2025-05-25 03:17:18.761325 | orchestrator | ++ export CEPH_VERSION=reef
2025-05-25 03:17:18.761336 | orchestrator | ++ CEPH_VERSION=reef
2025-05-25 03:17:18.761351 | orchestrator | ++ export CONFIGURATION_VERSION=main
2025-05-25 03:17:18.761362 | orchestrator | ++ CONFIGURATION_VERSION=main
2025-05-25 03:17:18.761391 | orchestrator | ++ export MANAGER_VERSION=latest
2025-05-25 03:17:18.761403 | orchestrator | ++ MANAGER_VERSION=latest
2025-05-25 03:17:18.761417 | orchestrator | ++ export OPENSTACK_VERSION=2024.2
2025-05-25 03:17:18.761428 | orchestrator | ++ OPENSTACK_VERSION=2024.2
2025-05-25 03:17:18.761439 | orchestrator | ++ export ARA=false
2025-05-25 03:17:18.761458 | orchestrator | ++ ARA=false
2025-05-25 03:17:18.761485 | orchestrator | ++ export TEMPEST=true
2025-05-25 03:17:18.761496 | orchestrator | ++ TEMPEST=true
2025-05-25 03:17:18.761507 | orchestrator | ++ export IS_ZUUL=true
2025-05-25 03:17:18.761517 | orchestrator | ++ IS_ZUUL=true
2025-05-25 03:17:18.761528 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.192.153
2025-05-25 03:17:18.761543 | orchestrator | ++ MANAGER_PUBLIC_IP_ADDRESS=81.163.192.153
2025-05-25 03:17:18.761553 | orchestrator | ++ export EXTERNAL_API=false
2025-05-25 03:17:18.761564 | orchestrator | ++ EXTERNAL_API=false
2025-05-25 03:17:18.761575 | orchestrator | ++ export IMAGE_USER=ubuntu
2025-05-25 03:17:18.761604 | orchestrator | ++ IMAGE_USER=ubuntu
2025-05-25 03:17:18.761620 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu
2025-05-25 03:17:18.761631 | orchestrator | ++ IMAGE_NODE_USER=ubuntu
2025-05-25 03:17:18.761642 | orchestrator | ++ export CEPH_STACK=ceph-ansible
2025-05-25 03:17:18.761653 | orchestrator | ++ CEPH_STACK=ceph-ansible
2025-05-25 03:17:18.761664 | orchestrator | + sudo ln -sf /opt/configuration/contrib/semver2.sh /usr/local/bin/semver
2025-05-25 03:17:18.817903 | orchestrator | + docker version
2025-05-25 03:17:19.099084 | orchestrator | Client: Docker Engine - Community
2025-05-25 03:17:19.099176 | orchestrator |  Version:           27.5.1
2025-05-25 03:17:19.099191 | orchestrator |  API version:       1.47
2025-05-25 03:17:19.099200 | orchestrator |  Go version:        go1.22.11
2025-05-25 03:17:19.099208 | orchestrator |  Git commit:        9f9e405
2025-05-25 03:17:19.099219 | orchestrator |  Built:             Wed Jan 22 13:41:48 2025
2025-05-25 03:17:19.099228 | orchestrator |  OS/Arch:           linux/amd64
2025-05-25 03:17:19.099236 | orchestrator |  Context:           default
2025-05-25 03:17:19.099244 | orchestrator |
2025-05-25 03:17:19.099253 | orchestrator | Server: Docker Engine - Community
2025-05-25 03:17:19.099261 | orchestrator |  Engine:
2025-05-25 03:17:19.099270 | orchestrator |   Version:          27.5.1
2025-05-25 03:17:19.099278 | orchestrator |   API version:      1.47 (minimum version 1.24)
2025-05-25 03:17:19.099285 | orchestrator |   Go version:       go1.22.11
2025-05-25 03:17:19.099294 | orchestrator |   Git commit:       4c9b3b0
2025-05-25 03:17:19.099327 | orchestrator |   Built:            Wed Jan 22 13:41:48 2025
2025-05-25 03:17:19.099335 | orchestrator |   OS/Arch:          linux/amd64
2025-05-25 03:17:19.099343 | orchestrator |   Experimental:     false
2025-05-25 03:17:19.099351 | orchestrator |  containerd:
2025-05-25 03:17:19.099359 | orchestrator |   Version:          1.7.27
2025-05-25 03:17:19.099367 | orchestrator |   GitCommit:        05044ec0a9a75232cad458027ca83437aae3f4da
2025-05-25 03:17:19.099375 | orchestrator |  runc:
2025-05-25 03:17:19.099383 | orchestrator |   Version:          1.2.5
2025-05-25 03:17:19.099391 | orchestrator |   GitCommit:        v1.2.5-0-g59923ef
2025-05-25 03:17:19.099399 | orchestrator |  docker-init:
2025-05-25 03:17:19.099407 | orchestrator |   Version:          0.19.0
2025-05-25 03:17:19.099414 | orchestrator |   GitCommit:        de40ad0
2025-05-25 03:17:19.103124 | orchestrator | + sh -c /opt/configuration/scripts/deploy/000-manager.sh
2025-05-25 03:17:19.112746 | orchestrator | + set -e
2025-05-25 03:17:19.112765 | orchestrator | + source /opt/manager-vars.sh
2025-05-25 03:17:19.112774 | orchestrator | ++ export NUMBER_OF_NODES=6
2025-05-25 03:17:19.112782 | orchestrator | ++ NUMBER_OF_NODES=6
2025-05-25 03:17:19.112790 | orchestrator | ++ export CEPH_VERSION=reef
2025-05-25 03:17:19.112798 | orchestrator | ++ CEPH_VERSION=reef
2025-05-25 03:17:19.112806 | orchestrator | ++ export CONFIGURATION_VERSION=main
2025-05-25 03:17:19.112814 | orchestrator | ++ CONFIGURATION_VERSION=main
2025-05-25 03:17:19.112822 | orchestrator | ++ export MANAGER_VERSION=latest
2025-05-25 03:17:19.112830 | orchestrator | ++ MANAGER_VERSION=latest
2025-05-25 03:17:19.112838 | orchestrator | ++ export OPENSTACK_VERSION=2024.2
2025-05-25 03:17:19.112845 | orchestrator | ++ OPENSTACK_VERSION=2024.2
2025-05-25 03:17:19.112853 | orchestrator | ++ export ARA=false
2025-05-25 03:17:19.112861 | orchestrator | ++ ARA=false
2025-05-25 03:17:19.112869 | orchestrator | ++ export TEMPEST=true
2025-05-25 03:17:19.112877 | orchestrator | ++ TEMPEST=true
2025-05-25 03:17:19.112885 | orchestrator | ++ export IS_ZUUL=true
2025-05-25 03:17:19.112892 | orchestrator | ++ IS_ZUUL=true
2025-05-25 03:17:19.112901 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.192.153
2025-05-25 03:17:19.112909 | orchestrator | ++ MANAGER_PUBLIC_IP_ADDRESS=81.163.192.153
2025-05-25 03:17:19.112921 | orchestrator | ++ export EXTERNAL_API=false
2025-05-25 03:17:19.112929 | orchestrator | ++ EXTERNAL_API=false
2025-05-25 03:17:19.112937 | orchestrator | ++ export IMAGE_USER=ubuntu
2025-05-25 03:17:19.112944 | orchestrator | ++ IMAGE_USER=ubuntu
2025-05-25 03:17:19.112952 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu
2025-05-25 03:17:19.112960 | orchestrator | ++ IMAGE_NODE_USER=ubuntu
2025-05-25 03:17:19.112968 | orchestrator | ++ export CEPH_STACK=ceph-ansible
2025-05-25 03:17:19.112975 | orchestrator | ++ CEPH_STACK=ceph-ansible
2025-05-25 03:17:19.113199 | orchestrator | + source /opt/configuration/scripts/include.sh
2025-05-25 03:17:19.113212 | orchestrator | ++ export INTERACTIVE=false
2025-05-25 03:17:19.113225 | orchestrator | ++ INTERACTIVE=false
2025-05-25 03:17:19.113233 | orchestrator | ++ export OSISM_APPLY_RETRY=1
2025-05-25 03:17:19.113241 | orchestrator | ++ OSISM_APPLY_RETRY=1
2025-05-25 03:17:19.113252 | orchestrator | + [[ latest != \l\a\t\e\s\t ]]
2025-05-25 03:17:19.113260 | orchestrator | + [[ latest == \l\a\t\e\s\t ]]
2025-05-25 03:17:19.113327 | orchestrator | + /opt/configuration/scripts/set-ceph-version.sh reef
2025-05-25 03:17:19.120590 | orchestrator | + set -e
2025-05-25 03:17:19.121180 | orchestrator | + VERSION=reef
2025-05-25 03:17:19.121702 | orchestrator | ++ grep '^ceph_version:' /opt/configuration/environments/manager/configuration.yml
2025-05-25 03:17:19.128196 | orchestrator | + [[ -n ceph_version: reef ]]
2025-05-25 03:17:19.128214 | orchestrator | + sed -i 's/ceph_version: .*/ceph_version: reef/g' /opt/configuration/environments/manager/configuration.yml
2025-05-25 03:17:19.133088 | orchestrator | + /opt/configuration/scripts/set-openstack-version.sh 2024.2
2025-05-25 03:17:19.139635 | orchestrator | + set -e
2025-05-25 03:17:19.140161 | orchestrator | + VERSION=2024.2
2025-05-25 03:17:19.140711 | orchestrator | ++ grep '^openstack_version:' /opt/configuration/environments/manager/configuration.yml
2025-05-25 03:17:19.144604 | orchestrator | + [[ -n openstack_version: 2024.2 ]]
2025-05-25 03:17:19.144619 | orchestrator | + sed -i 's/openstack_version: .*/openstack_version: 2024.2/g' /opt/configuration/environments/manager/configuration.yml
2025-05-25 03:17:19.150002 | orchestrator | + [[ ceph-ansible == \r\o\o\k ]]
2025-05-25 03:17:19.151103 | orchestrator | ++ semver latest 7.0.0
2025-05-25 03:17:19.210831 | orchestrator | + [[ -1 -ge 0 ]]
2025-05-25 03:17:19.210886 | orchestrator | + [[ latest == \l\a\t\e\s\t ]]
2025-05-25 03:17:19.210899 | orchestrator | + echo 'enable_osism_kubernetes: true'
2025-05-25 03:17:19.210911 | orchestrator | + /opt/configuration/scripts/enable-resource-nodes.sh
2025-05-25 03:17:19.251892 | orchestrator | + [[ -e /opt/venv/bin/activate ]]
2025-05-25 03:17:19.254450 | orchestrator | + source /opt/venv/bin/activate
2025-05-25 03:17:19.255696 | orchestrator | ++ deactivate nondestructive
2025-05-25 03:17:19.255721 | orchestrator | ++ '[' -n '' ']'
2025-05-25 03:17:19.255732 | orchestrator | ++ '[' -n '' ']'
2025-05-25 03:17:19.255743 | orchestrator | ++ hash -r
2025-05-25 03:17:19.255758 | orchestrator | ++ '[' -n '' ']'
2025-05-25 03:17:19.255770 | orchestrator | ++ unset VIRTUAL_ENV
2025-05-25 03:17:19.255780 | orchestrator | ++ unset VIRTUAL_ENV_PROMPT
2025-05-25 03:17:19.255791 | orchestrator | ++ '[' '!' nondestructive = nondestructive ']'
2025-05-25 03:17:19.255903 | orchestrator | ++ '[' linux-gnu = cygwin ']'
2025-05-25 03:17:19.255927 | orchestrator | ++ '[' linux-gnu = msys ']'
2025-05-25 03:17:19.255943 | orchestrator | ++ export VIRTUAL_ENV=/opt/venv
2025-05-25 03:17:19.255955 | orchestrator | ++ VIRTUAL_ENV=/opt/venv
2025-05-25 03:17:19.255970 | orchestrator | ++ _OLD_VIRTUAL_PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin
2025-05-25 03:17:19.256137 | orchestrator | ++ PATH=/opt/venv/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin
2025-05-25 03:17:19.256154 | orchestrator | ++ export PATH
2025-05-25 03:17:19.256370 | orchestrator | ++ '[' -n '' ']'
2025-05-25 03:17:19.256391 | orchestrator | ++ '[' -z '' ']'
2025-05-25 03:17:19.256402 | orchestrator | ++ _OLD_VIRTUAL_PS1=
2025-05-25 03:17:19.256417 | orchestrator | ++ PS1='(venv) '
2025-05-25 03:17:19.256428 | orchestrator | ++ export PS1
2025-05-25 03:17:19.256439 | orchestrator | ++ VIRTUAL_ENV_PROMPT='(venv) '
2025-05-25 03:17:19.256450 | orchestrator | ++ export VIRTUAL_ENV_PROMPT
2025-05-25 03:17:19.256461 | orchestrator | ++ hash -r
2025-05-25 03:17:19.256597 | orchestrator | + ansible-playbook -i testbed-manager, --vault-password-file /opt/configuration/environments/.vault_pass /opt/configuration/ansible/manager-part-3.yml
2025-05-25 03:17:20.457414 | orchestrator |
2025-05-25 03:17:20.457528 | orchestrator | PLAY [Copy custom facts] *******************************************************
2025-05-25 03:17:20.457545 | orchestrator |
2025-05-25 03:17:20.457581 | orchestrator | TASK [Create custom facts directory] *******************************************
2025-05-25 03:17:21.022164 | orchestrator | ok: [testbed-manager]
2025-05-25 03:17:21.022281 | orchestrator |
2025-05-25 03:17:21.022299 | orchestrator | TASK [Copy fact files] *********************************************************
2025-05-25 03:17:22.005699 | orchestrator | changed: [testbed-manager]
2025-05-25 03:17:22.005811 | orchestrator |
2025-05-25 03:17:22.005826 | orchestrator | PLAY [Before the deployment of the manager] ************************************
2025-05-25 03:17:22.005838 | orchestrator |
2025-05-25 03:17:22.005850 | orchestrator | TASK [Gathering Facts] *********************************************************
2025-05-25 03:17:24.467474 | orchestrator | ok: [testbed-manager]
2025-05-25 03:17:24.467615 | orchestrator |
2025-05-25 03:17:24.467643 | orchestrator | TASK [Pull images] *************************************************************
2025-05-25 03:17:29.167640 | orchestrator | changed: [testbed-manager] => (item=registry.osism.tech/osism/ara-server:1.7.2)
2025-05-25 03:17:29.167757 | orchestrator | changed: [testbed-manager] => (item=registry.osism.tech/dockerhub/library/mariadb:11.7.2)
2025-05-25 03:17:29.167773 | orchestrator | changed: [testbed-manager] => (item=registry.osism.tech/osism/ceph-ansible:reef)
2025-05-25 03:17:29.167788 | orchestrator | changed: [testbed-manager] => (item=registry.osism.tech/osism/inventory-reconciler:latest)
2025-05-25 03:17:29.167801 | orchestrator | changed: [testbed-manager] => (item=registry.osism.tech/osism/kolla-ansible:2024.2)
2025-05-25 03:17:29.167812 | orchestrator | changed: [testbed-manager] => (item=registry.osism.tech/dockerhub/library/redis:7.4.3-alpine)
2025-05-25 03:17:29.167823 | orchestrator | changed: [testbed-manager] => (item=registry.osism.tech/osism/netbox:v4.2.2)
2025-05-25 03:17:29.167834 | orchestrator | changed: [testbed-manager] => (item=registry.osism.tech/osism/osism-ansible:latest)
2025-05-25 03:17:29.167845 | orchestrator | changed: [testbed-manager] => (item=registry.osism.tech/osism/osism:latest)
2025-05-25 03:17:29.167856 | orchestrator | changed: [testbed-manager] => (item=registry.osism.tech/dockerhub/library/postgres:16.9-alpine)
2025-05-25 03:17:29.167867 | orchestrator | changed: [testbed-manager] => (item=registry.osism.tech/dockerhub/library/traefik:v3.4.0)
2025-05-25 03:17:29.167878 | orchestrator | changed: [testbed-manager] => (item=registry.osism.tech/dockerhub/hashicorp/vault:1.19.3)
2025-05-25 03:17:29.167889 | orchestrator |
2025-05-25 03:17:29.167926 | orchestrator | TASK [Check status] ************************************************************
2025-05-25 03:18:44.879894 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check status (120 retries left).
2025-05-25 03:18:44.880015 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check status (119 retries left).
2025-05-25 03:18:44.880030 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check status (118 retries left).
2025-05-25 03:18:44.880042 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check status (117 retries left).
2025-05-25 03:18:44.880069 | orchestrator | changed: [testbed-manager] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j155497836578.1545', 'results_file': '/home/dragon/.ansible_async/j155497836578.1545', 'changed': True, 'item': 'registry.osism.tech/osism/ara-server:1.7.2', 'ansible_loop_var': 'item'})
2025-05-25 03:18:44.880089 | orchestrator | changed: [testbed-manager] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j382740658477.1570', 'results_file': '/home/dragon/.ansible_async/j382740658477.1570', 'changed': True, 'item': 'registry.osism.tech/dockerhub/library/mariadb:11.7.2', 'ansible_loop_var': 'item'})
2025-05-25 03:18:44.880105 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check status (120 retries left).
2025-05-25 03:18:44.880116 | orchestrator | changed: [testbed-manager] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j810015504561.1595', 'results_file': '/home/dragon/.ansible_async/j810015504561.1595', 'changed': True, 'item': 'registry.osism.tech/osism/ceph-ansible:reef', 'ansible_loop_var': 'item'})
2025-05-25 03:18:44.880158 | orchestrator | changed: [testbed-manager] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j619239050412.1626', 'results_file': '/home/dragon/.ansible_async/j619239050412.1626', 'changed': True, 'item': 'registry.osism.tech/osism/inventory-reconciler:latest', 'ansible_loop_var': 'item'})
2025-05-25 03:18:44.880184 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check status (120 retries left).
2025-05-25 03:18:44.880204 | orchestrator | changed: [testbed-manager] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j678782612827.1659', 'results_file': '/home/dragon/.ansible_async/j678782612827.1659', 'changed': True, 'item': 'registry.osism.tech/osism/kolla-ansible:2024.2', 'ansible_loop_var': 'item'})
2025-05-25 03:18:44.880216 | orchestrator | changed: [testbed-manager] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j984415840351.1691', 'results_file': '/home/dragon/.ansible_async/j984415840351.1691', 'changed': True, 'item': 'registry.osism.tech/dockerhub/library/redis:7.4.3-alpine', 'ansible_loop_var': 'item'})
2025-05-25 03:18:44.880227 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check status (120 retries left).
2025-05-25 03:18:44.880238 | orchestrator | changed: [testbed-manager] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j631395204371.1723', 'results_file': '/home/dragon/.ansible_async/j631395204371.1723', 'changed': True, 'item': 'registry.osism.tech/osism/netbox:v4.2.2', 'ansible_loop_var': 'item'})
2025-05-25 03:18:44.880249 | orchestrator | changed: [testbed-manager] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j152321755386.1755', 'results_file': '/home/dragon/.ansible_async/j152321755386.1755', 'changed': True, 'item': 'registry.osism.tech/osism/osism-ansible:latest', 'ansible_loop_var': 'item'})
2025-05-25 03:18:44.880261 | orchestrator | changed: [testbed-manager] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j814473698152.1793', 'results_file': '/home/dragon/.ansible_async/j814473698152.1793', 'changed': True, 'item': 'registry.osism.tech/osism/osism:latest', 'ansible_loop_var': 'item'})
2025-05-25 03:18:44.880272 | orchestrator | changed: [testbed-manager] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j653652262751.1818', 'results_file': '/home/dragon/.ansible_async/j653652262751.1818', 'changed': True, 'item': 'registry.osism.tech/dockerhub/library/postgres:16.9-alpine', 'ansible_loop_var': 'item'})
2025-05-25 03:18:44.880283 | orchestrator | changed: [testbed-manager] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j630407119980.1851', 'results_file': '/home/dragon/.ansible_async/j630407119980.1851', 'changed': True, 'item': 'registry.osism.tech/dockerhub/library/traefik:v3.4.0', 'ansible_loop_var': 'item'})
2025-05-25 03:18:44.880317 | orchestrator | changed: [testbed-manager] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j658538112199.1885', 'results_file': '/home/dragon/.ansible_async/j658538112199.1885', 'changed': True, 'item': 'registry.osism.tech/dockerhub/hashicorp/vault:1.19.3', 'ansible_loop_var': 'item'})
2025-05-25 03:18:44.880329 | orchestrator |
2025-05-25 03:18:44.880341 | orchestrator | TASK [Get /opt/manager-vars.sh] ************************************************
2025-05-25 03:18:44.926232 | orchestrator | ok: [testbed-manager]
2025-05-25 03:18:44.926314 | orchestrator |
2025-05-25 03:18:44.926328 | orchestrator | TASK [Add ara_server_mariadb_volume_type parameter] ****************************
2025-05-25 03:18:45.406761 | orchestrator | changed: [testbed-manager]
2025-05-25 03:18:45.406866 | orchestrator |
2025-05-25 03:18:45.406881 | orchestrator | TASK [Add netbox_postgres_volume_type parameter] *******************************
2025-05-25 03:18:45.742824 | orchestrator | changed: [testbed-manager]
2025-05-25 03:18:45.742925 | orchestrator |
2025-05-25 03:18:45.742944 | orchestrator | TASK [Install HWE kernel package on Ubuntu] ************************************
2025-05-25 03:18:46.086086 | orchestrator | changed: [testbed-manager]
2025-05-25 03:18:46.086234 | orchestrator |
2025-05-25 03:18:46.086250 | orchestrator | TASK [Use insecure glance configuration] ***************************************
2025-05-25 03:18:46.152853 | orchestrator | skipping: [testbed-manager]
2025-05-25 03:18:46.152922 | orchestrator |
2025-05-25 03:18:46.152936 | orchestrator | TASK [Check if /etc/OTC_region exist] ******************************************
2025-05-25 03:18:46.486658 | orchestrator | ok: [testbed-manager]
2025-05-25 03:18:46.486762 | orchestrator |
2025-05-25 03:18:46.486778 | orchestrator | TASK [Add nova_compute_virt_type parameter] ************************************
2025-05-25 03:18:46.584023 | orchestrator | skipping: [testbed-manager]
2025-05-25 03:18:46.584114 | orchestrator |
2025-05-25 03:18:46.584127 | orchestrator | PLAY [Apply role traefik & netbox] *********************************************
2025-05-25 03:18:46.584187 | orchestrator |
2025-05-25 03:18:46.584200 | orchestrator | TASK [Gathering Facts] *********************************************************
2025-05-25 03:18:48.353988 | orchestrator | ok: [testbed-manager]
2025-05-25 03:18:48.354100 | orchestrator |
2025-05-25 03:18:48.354107 | orchestrator | TASK [Apply traefik role] ******************************************************
2025-05-25 03:18:48.461530 | orchestrator | included: osism.services.traefik for testbed-manager
2025-05-25 03:18:48.461627 | orchestrator |
2025-05-25 03:18:48.461643 | orchestrator | TASK [osism.services.traefik : Include config tasks] ***************************
2025-05-25 03:18:48.518596 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/traefik/tasks/config.yml for testbed-manager
2025-05-25 03:18:48.518693 | orchestrator |
2025-05-25 03:18:48.518708 | orchestrator | TASK [osism.services.traefik : Create required directories] ********************
2025-05-25 03:18:49.572489 | orchestrator | changed: [testbed-manager] => (item=/opt/traefik)
2025-05-25 03:18:49.572572 | orchestrator | changed: [testbed-manager] => (item=/opt/traefik/certificates)
2025-05-25 03:18:49.572580 | orchestrator | changed: [testbed-manager] => (item=/opt/traefik/configuration)
2025-05-25 03:18:49.572590 | orchestrator |
2025-05-25 03:18:49.572598 | orchestrator | TASK [osism.services.traefik : Copy configuration files] ***********************
2025-05-25 03:18:51.375764 | orchestrator | changed: [testbed-manager] => (item=traefik.yml)
2025-05-25 03:18:51.375882 | orchestrator | changed: [testbed-manager] => (item=traefik.env)
2025-05-25 03:18:51.375898 | orchestrator | changed: [testbed-manager] => (item=certificates.yml)
2025-05-25 03:18:51.375912 | orchestrator |
2025-05-25 03:18:51.375946 | orchestrator | TASK [osism.services.traefik : Copy certificate cert files] ********************
2025-05-25 03:18:52.028885 | orchestrator | changed: [testbed-manager] => (item=None)
2025-05-25 03:18:52.028991 | orchestrator | changed: [testbed-manager]
2025-05-25 03:18:52.029008 | orchestrator |
2025-05-25 03:18:52.029021 | orchestrator | TASK [osism.services.traefik : Copy certificate key files] *********************
2025-05-25 03:18:52.658717 | orchestrator | changed: [testbed-manager] => (item=None)
2025-05-25 03:18:52.658819 | orchestrator | changed: [testbed-manager]
2025-05-25 03:18:52.658860 | orchestrator |
2025-05-25 03:18:52.658874 | orchestrator | TASK [osism.services.traefik : Copy dynamic configuration] *********************
2025-05-25 03:18:52.717600 | orchestrator | skipping: [testbed-manager]
2025-05-25 03:18:52.717673 | orchestrator |
2025-05-25 03:18:52.717687 | orchestrator | TASK [osism.services.traefik : Remove dynamic configuration] *******************
2025-05-25 03:18:53.066316 | orchestrator | ok: [testbed-manager]
2025-05-25 03:18:53.066434 | orchestrator |
2025-05-25 03:18:53.066457 | orchestrator | TASK [osism.services.traefik : Include service tasks] **************************
2025-05-25 03:18:53.129506 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/traefik/tasks/service.yml for testbed-manager
2025-05-25 03:18:53.129596 | orchestrator |
2025-05-25 03:18:53.129609 | orchestrator | TASK [osism.services.traefik : Create traefik external network] ****************
2025-05-25 03:18:54.178838 | orchestrator | changed: [testbed-manager]
2025-05-25 03:18:54.178943 | orchestrator |
2025-05-25 03:18:54.178959 | orchestrator | TASK [osism.services.traefik : Copy docker-compose.yml file] *******************
2025-05-25 03:18:55.018772 | orchestrator | changed: [testbed-manager]
2025-05-25 03:18:55.018880 | orchestrator |
2025-05-25 03:18:55.018896 | orchestrator | TASK [osism.services.traefik : Manage traefik service] *************************
2025-05-25 03:18:58.532726 | orchestrator | changed: [testbed-manager]
2025-05-25 03:18:58.532836 | orchestrator |
2025-05-25 03:18:58.532853 | orchestrator | TASK [Apply netbox role] *******************************************************
2025-05-25 03:18:58.637198 | orchestrator | included: osism.services.netbox for testbed-manager
2025-05-25 03:18:58.637294 | orchestrator |
2025-05-25 03:18:58.637308 | orchestrator | TASK [osism.services.netbox : Include install tasks] ***************************
2025-05-25 03:18:58.718387 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netbox/tasks/install-Debian-family.yml for testbed-manager
2025-05-25 03:18:58.718460 | orchestrator |
2025-05-25 03:18:58.718474 | orchestrator | TASK [osism.services.netbox : Install required packages] ***********************
2025-05-25 03:19:01.275962 | orchestrator | ok: [testbed-manager]
2025-05-25 03:19:01.276095 | orchestrator |
2025-05-25 03:19:01.276123 | orchestrator | TASK [osism.services.netbox : Include config tasks] ****************************
2025-05-25 03:19:01.390314 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netbox/tasks/config.yml for testbed-manager
2025-05-25 03:19:01.390413 | orchestrator |
2025-05-25 03:19:01.390428 | orchestrator | TASK [osism.services.netbox : Create required directories] *********************
2025-05-25 03:19:02.510099 | orchestrator | changed: [testbed-manager] => (item=/opt/netbox)
2025-05-25 03:19:02.510267 | orchestrator | changed: [testbed-manager] => (item=/opt/netbox/configuration)
2025-05-25 03:19:02.510284 | orchestrator | changed: [testbed-manager] => (item=/opt/netbox/secrets)
2025-05-25 03:19:02.510296 | orchestrator |
2025-05-25 03:19:02.510309 | orchestrator | TASK [osism.services.netbox : Include postgres config tasks] *******************
2025-05-25 03:19:02.585747 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netbox/tasks/config-postgres.yml for testbed-manager
2025-05-25 03:19:02.585824 | orchestrator |
2025-05-25 03:19:02.585837 | orchestrator | TASK [osism.services.netbox : Copy postgres environment files] *****************
2025-05-25 03:19:03.219135 | orchestrator | changed: [testbed-manager] => (item=postgres)
2025-05-25 03:19:03.219281 | orchestrator |
2025-05-25 03:19:03.219298 | orchestrator | TASK [osism.services.netbox : Copy postgres configuration file] ****************
2025-05-25 03:19:03.855649 | orchestrator | changed: [testbed-manager]
2025-05-25 03:19:03.855756 | orchestrator |
2025-05-25 03:19:03.855773 | orchestrator | TASK [osism.services.netbox : Copy secret files] *******************************
2025-05-25 03:19:04.498256 | orchestrator | changed: [testbed-manager] => (item=None)
2025-05-25 03:19:04.498369 | orchestrator | changed: [testbed-manager]
2025-05-25 03:19:04.498398 | orchestrator |
2025-05-25 03:19:04.498427 | orchestrator | TASK [osism.services.netbox : Create docker-entrypoint-initdb.d directory] *****
2025-05-25 03:19:04.919587 | orchestrator | changed: [testbed-manager]
2025-05-25 03:19:04.919687 | orchestrator |
2025-05-25 03:19:04.919703 | orchestrator | TASK [osism.services.netbox : Check if init.sql file exists] *******************
2025-05-25 03:19:05.272874 | orchestrator | ok: [testbed-manager]
2025-05-25 03:19:05.272978 | orchestrator |
2025-05-25 03:19:05.273024 | orchestrator | TASK [osism.services.netbox : Copy init.sql file] ******************************
2025-05-25 03:19:05.322297 | orchestrator | skipping: [testbed-manager]
2025-05-25 03:19:05.322347 | orchestrator |
2025-05-25 03:19:05.322359 | orchestrator | TASK [osism.services.netbox : Create init-netbox-database.sh script] ***********
2025-05-25 03:19:05.953512 | orchestrator | changed: [testbed-manager]
2025-05-25 03:19:05.953616 | orchestrator |
2025-05-25 03:19:05.953632 | orchestrator | TASK [osism.services.netbox : Include config tasks] ****************************
2025-05-25 03:19:06.018395 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netbox/tasks/config-netbox.yml for testbed-manager
2025-05-25 03:19:06.018484 | orchestrator |
2025-05-25 03:19:06.018498 | orchestrator | TASK [osism.services.netbox : Create directories required by netbox] ***********
2025-05-25 03:19:06.781234 | orchestrator | changed: [testbed-manager] => (item=/opt/netbox/configuration/initializers)
2025-05-25 03:19:06.781350 | orchestrator | changed: [testbed-manager] => (item=/opt/netbox/configuration/startup-scripts)
2025-05-25 03:19:06.781366 | orchestrator |
2025-05-25 03:19:06.781404 | orchestrator | TASK [osism.services.netbox : Copy netbox environment files] *******************
2025-05-25 03:19:07.441484 | orchestrator | changed: [testbed-manager] => (item=netbox)
2025-05-25 03:19:07.441582 | orchestrator |
2025-05-25 03:19:07.441596 | orchestrator | TASK [osism.services.netbox : Copy netbox configuration file] ******************
2025-05-25 03:19:08.111022 | orchestrator | changed: [testbed-manager]
2025-05-25 03:19:08.111230 | orchestrator |
2025-05-25 03:19:08.111249 | orchestrator | TASK [osism.services.netbox : Copy nginx unit configuration file (<= 1.26)] ****
2025-05-25 03:19:08.162248 | orchestrator | skipping: [testbed-manager]
2025-05-25 03:19:08.162327 | orchestrator |
2025-05-25 03:19:08.162342 | orchestrator | TASK [osism.services.netbox : Copy nginx unit configuration file (> 1.26)] *****
2025-05-25 03:19:08.824900 | orchestrator | changed: [testbed-manager]
2025-05-25 03:19:08.825003 | orchestrator |
2025-05-25 03:19:08.825019 | orchestrator | TASK [osism.services.netbox : Copy secret files] *******************************
2025-05-25 03:19:10.623292 | orchestrator | changed: [testbed-manager] => (item=None)
2025-05-25 03:19:10.623408 | orchestrator | changed: [testbed-manager] => (item=None)
2025-05-25 03:19:10.623423 | orchestrator | changed: [testbed-manager] => (item=None)
2025-05-25 03:19:10.623436 | orchestrator | changed: [testbed-manager]
2025-05-25 03:19:10.623449 | orchestrator |
2025-05-25 03:19:10.623461 | orchestrator | TASK [osism.services.netbox : Deploy initializers for netbox] ******************
2025-05-25 03:19:16.509447 | orchestrator | changed: [testbed-manager] => (item=custom_fields)
2025-05-25 03:19:16.509568 | orchestrator | changed: [testbed-manager] => (item=device_roles)
2025-05-25 03:19:16.509586 | orchestrator | changed: [testbed-manager] => (item=device_types)
2025-05-25 03:19:16.509599 | orchestrator | changed: [testbed-manager] => (item=groups)
2025-05-25 03:19:16.509610 | orchestrator | changed: [testbed-manager] => (item=manufacturers)
2025-05-25 03:19:16.509621 | orchestrator | changed: [testbed-manager] => (item=object_permissions)
2025-05-25 03:19:16.509632 | orchestrator | changed: [testbed-manager] => (item=prefix_vlan_roles)
2025-05-25 03:19:16.509643 | orchestrator | changed: [testbed-manager] => (item=sites)
2025-05-25 03:19:16.509654 | orchestrator | changed: [testbed-manager] => (item=tags)
2025-05-25 03:19:16.509665 | orchestrator | changed: [testbed-manager] => (item=users)
2025-05-25 03:19:16.509676 | orchestrator |
2025-05-25 03:19:16.509688 | orchestrator | TASK [osism.services.netbox : Deploy startup scripts for netbox] ***************
2025-05-25 03:19:17.157468 | orchestrator | changed: [testbed-manager] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/netbox/files/startup-scripts/270_tags.py)
2025-05-25 03:19:17.157577 | orchestrator |
2025-05-25 03:19:17.157593 | orchestrator | TASK [osism.services.netbox : Include service tasks] ***************************
2025-05-25 03:19:17.248465 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netbox/tasks/service.yml for testbed-manager
2025-05-25 03:19:17.248549 | orchestrator |
2025-05-25 03:19:17.248563 | orchestrator | TASK [osism.services.netbox : Copy netbox systemd unit file] *******************
2025-05-25 03:19:17.952013 | orchestrator | changed: [testbed-manager]
2025-05-25 03:19:17.952115 | orchestrator |
2025-05-25 03:19:17.952131 | orchestrator | TASK [osism.services.netbox : Create traefik external network] *****************
2025-05-25 03:19:18.552457 | orchestrator | ok: [testbed-manager]
2025-05-25 03:19:18.552560 | orchestrator |
2025-05-25 03:19:18.552576 | orchestrator | TASK [osism.services.netbox : Copy docker-compose.yml file] ********************
2025-05-25 03:19:19.245609 | orchestrator | changed: [testbed-manager]
2025-05-25 03:19:19.245711 | orchestrator |
2025-05-25 03:19:19.245730 | orchestrator | TASK [osism.services.netbox : Pull container images] ***************************
2025-05-25 03:19:21.677843 | orchestrator | ok: [testbed-manager]
2025-05-25 03:19:21.677969 | orchestrator |
2025-05-25 03:19:21.677987 | orchestrator | TASK [osism.services.netbox : Stop and disable old service docker-compose@netbox] ***
2025-05-25 03:19:22.662964 | orchestrator | ok: [testbed-manager]
2025-05-25 03:19:22.663063 | orchestrator |
2025-05-25 03:19:22.663077 | orchestrator | TASK [osism.services.netbox : Manage netbox service] ***************************
2025-05-25 03:19:44.691082 | orchestrator | FAILED - RETRYING: [testbed-manager]: Manage netbox service (10 retries left).
2025-05-25 03:19:44.691205 | orchestrator | ok: [testbed-manager]
2025-05-25 03:19:44.691222 | orchestrator |
2025-05-25 03:19:44.691314 | orchestrator | TASK [osism.services.netbox : Register that netbox service was started] ********
2025-05-25 03:19:44.740906 | orchestrator | skipping: [testbed-manager]
2025-05-25 03:19:44.740998 | orchestrator |
2025-05-25 03:19:44.741011 | orchestrator | TASK [osism.services.netbox : Flush handlers] **********************************
2025-05-25 03:19:44.741022 | orchestrator |
2025-05-25 03:19:44.741031 | orchestrator | RUNNING HANDLER [osism.services.traefik : Restart traefik service] *************
2025-05-25 03:19:44.776849 | orchestrator | skipping: [testbed-manager]
2025-05-25 03:19:44.776942 | orchestrator |
2025-05-25 03:19:44.776957 | orchestrator | RUNNING HANDLER [osism.services.netbox : Restart netbox service] ***************
2025-05-25 03:19:44.828429 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netbox/tasks/restart-service.yml for testbed-manager
2025-05-25 03:19:44.828511 | orchestrator |
2025-05-25 03:19:44.828525 | orchestrator | RUNNING HANDLER [osism.services.netbox : Get infos on postgres container] ******
2025-05-25 03:19:45.578821 | orchestrator | ok: [testbed-manager]
2025-05-25 03:19:45.578922 | orchestrator |
2025-05-25 03:19:45.578936 | orchestrator | RUNNING HANDLER [osism.services.netbox : Set postgres container version fact] ***
2025-05-25 03:19:45.638515 | orchestrator | ok: [testbed-manager]
2025-05-25 03:19:45.638583 | orchestrator |
2025-05-25 03:19:45.638597 | orchestrator | RUNNING HANDLER [osism.services.netbox : Print major version of postgres container] ***
2025-05-25 03:19:45.689873 | orchestrator | ok: [testbed-manager] => {
2025-05-25 03:19:45.689980 | orchestrator |     "msg": "The major version of the running postgres container is 16"
2025-05-25 03:19:45.689996 | orchestrator | }
2025-05-25 03:19:45.690008 | orchestrator |
2025-05-25 03:19:45.690100 | orchestrator | RUNNING HANDLER [osism.services.netbox : Pull postgres image] ******************
2025-05-25 03:19:46.242925 | orchestrator | ok: [testbed-manager]
2025-05-25 03:19:46.243030 | orchestrator |
2025-05-25 03:19:46.243046 | orchestrator | RUNNING HANDLER [osism.services.netbox : Get infos on postgres image] **********
2025-05-25 03:19:47.012843 | orchestrator | ok: [testbed-manager]
2025-05-25 03:19:47.012944 | orchestrator |
2025-05-25 03:19:47.012960 | orchestrator | RUNNING HANDLER [osism.services.netbox : Set postgres image version fact] ******
2025-05-25 03:19:47.082499 | orchestrator | ok: [testbed-manager]
2025-05-25 03:19:47.082665 | orchestrator |
2025-05-25 03:19:47.082685 | orchestrator | RUNNING HANDLER [osism.services.netbox : Print major version of postgres image] ***
2025-05-25 03:19:47.132181 | orchestrator | ok: [testbed-manager] => {
2025-05-25 03:19:47.132285 | orchestrator |     "msg": "The major version of the postgres image is 16"
2025-05-25 03:19:47.132297 | orchestrator | }
2025-05-25 03:19:47.132306 | orchestrator |
2025-05-25 03:19:47.132315 | orchestrator | RUNNING HANDLER [osism.services.netbox : Stop netbox service] ******************
2025-05-25 03:19:47.192658 | orchestrator | skipping: [testbed-manager]
2025-05-25 03:19:47.192729 | orchestrator |
2025-05-25 03:19:47.192735 | orchestrator | RUNNING HANDLER [osism.services.netbox : Wait for netbox service to stop] ******
2025-05-25 03:19:47.245333 | orchestrator | skipping: [testbed-manager]
2025-05-25 03:19:47.245376 | orchestrator |
2025-05-25 03:19:47.245384 | orchestrator | RUNNING HANDLER [osism.services.netbox : Get infos on postgres volume] *********
2025-05-25 03:19:47.296352 | orchestrator | skipping: [testbed-manager]
2025-05-25 03:19:47.296412 | orchestrator |
2025-05-25 03:19:47.296422 | orchestrator | RUNNING HANDLER [osism.services.netbox : Upgrade postgres database] ************
2025-05-25 03:19:47.352745 | orchestrator | skipping: [testbed-manager]
2025-05-25 03:19:47.352835 | orchestrator |
2025-05-25 03:19:47.352850 | orchestrator | RUNNING HANDLER [osism.services.netbox : Remove netbox-pgautoupgrade container] ***
2025-05-25 03:19:47.467690 | orchestrator | skipping: [testbed-manager]
2025-05-25 03:19:47.467789 | orchestrator |
2025-05-25 03:19:47.467806 | orchestrator | RUNNING HANDLER [osism.services.netbox : Start netbox service] *****************
2025-05-25 03:19:47.523367 | orchestrator | skipping: [testbed-manager]
2025-05-25 03:19:47.523444 | orchestrator |
2025-05-25 03:19:47.523458 | orchestrator | RUNNING HANDLER [osism.services.netbox : Restart netbox service] ***************
2025-05-25 03:19:48.880549 | orchestrator | changed: [testbed-manager]
2025-05-25 03:19:48.880650 | orchestrator |
2025-05-25 03:19:48.880666 | orchestrator | RUNNING HANDLER [osism.services.netbox : Register that netbox service was started] ***
2025-05-25 03:19:48.952222 | orchestrator | ok: [testbed-manager]
2025-05-25 03:19:48.952374 | orchestrator |
2025-05-25 03:19:48.952401 | orchestrator | RUNNING HANDLER [osism.services.netbox : Wait for netbox service to start] *****
2025-05-25 03:20:49.017359 | orchestrator | Pausing for 60 seconds
2025-05-25 03:20:49.017480 | orchestrator | changed: [testbed-manager]
2025-05-25 03:20:49.017496 | orchestrator |
2025-05-25 03:20:49.017509 | orchestrator | RUNNING HANDLER [osism.services.netbox : Wait for an healthy netbox service] ***
2025-05-25 03:20:49.076203 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netbox/tasks/wait-for-healthy-service.yml for testbed-manager
2025-05-25 03:20:49.076281 | orchestrator |
2025-05-25 03:20:49.076296 | orchestrator | RUNNING HANDLER [osism.services.netbox : Check that all containers are in a good state] ***
2025-05-25 03:24:29.025037 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check that all containers are in a good state (60 retries left).
2025-05-25 03:24:29.025156 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check that all containers are in a good state (59 retries left).
2025-05-25 03:24:29.025173 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check that all containers are in a good state (58 retries left).
2025-05-25 03:24:29.025185 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check that all containers are in a good state (57 retries left).
2025-05-25 03:24:29.025196 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check that all containers are in a good state (56 retries left).
2025-05-25 03:24:29.025207 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check that all containers are in a good state (55 retries left).
2025-05-25 03:24:29.025218 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check that all containers are in a good state (54 retries left).
2025-05-25 03:24:29.025229 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check that all containers are in a good state (53 retries left).
2025-05-25 03:24:29.025239 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check that all containers are in a good state (52 retries left).
2025-05-25 03:24:29.025250 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check that all containers are in a good state (51 retries left).
2025-05-25 03:24:29.025261 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check that all containers are in a good state (50 retries left).
2025-05-25 03:24:29.025272 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check that all containers are in a good state (49 retries left).
2025-05-25 03:24:29.025283 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check that all containers are in a good state (48 retries left).
2025-05-25 03:24:29.025293 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check that all containers are in a good state (47 retries left).
2025-05-25 03:24:29.025304 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check that all containers are in a good state (46 retries left).
2025-05-25 03:24:29.025315 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check that all containers are in a good state (45 retries left).
2025-05-25 03:24:29.025325 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check that all containers are in a good state (44 retries left).
2025-05-25 03:24:29.025357 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check that all containers are in a good state (43 retries left).
2025-05-25 03:24:29.025369 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check that all containers are in a good state (42 retries left).
2025-05-25 03:24:29.025380 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check that all containers are in a good state (41 retries left).
2025-05-25 03:24:29.025414 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check that all containers are in a good state (40 retries left).
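The long "FAILED - RETRYING ... (N retries left)" run above is Ansible's `until`/`retries` polling loop counting down from 60 attempts. The same poll-with-retries pattern can be sketched in shell; `check` below is a hypothetical stand-in for the role's actual probe ("all containers are in a good state") and is wired to succeed on the third call so the sketch terminates quickly.

```shell
#!/bin/sh
# Sketch of the poll-with-retries pattern behind the retry output above.
retries=60
attempt=1
# Hypothetical probe; the real task inspects container states.
check() { [ "$attempt" -ge 3 ]; }
until check; do
    if [ "$attempt" -ge "$retries" ]; then
        echo "giving up after $retries attempts"
        exit 1
    fi
    echo "FAILED - RETRYING ($((retries - attempt)) retries left)."
    attempt=$((attempt + 1))
done
echo "ok after $attempt attempts"
```

As in the log, a transient failure only costs retries; the task reports `changed`/`ok` once the probe finally passes.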
2025-05-25 03:24:29.025427 | orchestrator | changed: [testbed-manager]
2025-05-25 03:24:29.025440 | orchestrator |
2025-05-25 03:24:29.025451 | orchestrator | PLAY [Deploy manager service] **************************************************
2025-05-25 03:24:29.025462 | orchestrator |
2025-05-25 03:24:29.025473 | orchestrator | TASK [Gathering Facts] *********************************************************
2025-05-25 03:24:30.989271 | orchestrator | ok: [testbed-manager]
2025-05-25 03:24:30.989388 | orchestrator |
2025-05-25 03:24:30.989415 | orchestrator | TASK [Apply manager role] ******************************************************
2025-05-25 03:24:31.100793 | orchestrator | included: osism.services.manager for testbed-manager
2025-05-25 03:24:31.100899 | orchestrator |
2025-05-25 03:24:31.100914 | orchestrator | TASK [osism.services.manager : Include install tasks] **************************
2025-05-25 03:24:31.158525 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/install-Debian-family.yml for testbed-manager
2025-05-25 03:24:31.158641 | orchestrator |
2025-05-25 03:24:31.158657 | orchestrator | TASK [osism.services.manager : Install required packages] **********************
2025-05-25 03:24:32.985679 | orchestrator | ok: [testbed-manager]
2025-05-25 03:24:32.985801 | orchestrator |
2025-05-25 03:24:32.985821 | orchestrator | TASK [osism.services.manager : Gather variables for each operating system] *****
2025-05-25 03:24:33.044727 | orchestrator | ok: [testbed-manager]
2025-05-25 03:24:33.044845 | orchestrator |
2025-05-25 03:24:33.044871 | orchestrator | TASK [osism.services.manager : Include config tasks] ***************************
2025-05-25 03:24:33.136936 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config.yml for testbed-manager
2025-05-25 03:24:33.137030 | orchestrator |
2025-05-25 03:24:33.137045 | orchestrator | TASK [osism.services.manager : Create required directories] ********************
2025-05-25 03:24:35.996452 | orchestrator | changed: [testbed-manager] => (item=/opt/ansible)
2025-05-25 03:24:35.996563 | orchestrator | changed: [testbed-manager] => (item=/opt/archive)
2025-05-25 03:24:35.996578 | orchestrator | changed: [testbed-manager] => (item=/opt/manager/configuration)
2025-05-25 03:24:35.996590 | orchestrator | changed: [testbed-manager] => (item=/opt/manager/data)
2025-05-25 03:24:35.996602 | orchestrator | ok: [testbed-manager] => (item=/opt/manager)
2025-05-25 03:24:35.996613 | orchestrator | changed: [testbed-manager] => (item=/opt/manager/secrets)
2025-05-25 03:24:35.996685 | orchestrator | changed: [testbed-manager] => (item=/opt/ansible/secrets)
2025-05-25 03:24:35.996703 | orchestrator | changed: [testbed-manager] => (item=/opt/state)
2025-05-25 03:24:35.996715 | orchestrator |
2025-05-25 03:24:35.996728 | orchestrator | TASK [osism.services.manager : Copy client environment file] *******************
2025-05-25 03:24:36.643060 | orchestrator | changed: [testbed-manager]
2025-05-25 03:24:36.643165 | orchestrator |
2025-05-25 03:24:36.643181 | orchestrator | TASK [osism.services.manager : Include ara config tasks] ***********************
2025-05-25 03:24:36.733864 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-ara.yml for testbed-manager
2025-05-25 03:24:36.733977 | orchestrator |
2025-05-25 03:24:36.734005 | orchestrator | TASK [osism.services.manager : Copy ARA environment files] *********************
2025-05-25 03:24:37.938493 | orchestrator | changed: [testbed-manager] => (item=ara)
2025-05-25 03:24:37.938599 | orchestrator | changed: [testbed-manager] => (item=ara-server)
2025-05-25 03:24:37.938615 | orchestrator |
2025-05-25 03:24:37.938685 | orchestrator | TASK [osism.services.manager : Copy MariaDB environment file] ******************
2025-05-25 03:24:38.579010 | orchestrator | changed: [testbed-manager]
2025-05-25 03:24:38.579097 | orchestrator |
2025-05-25 03:24:38.579113 | orchestrator | TASK [osism.services.manager : Include vault config tasks] *********************
2025-05-25 03:24:38.635802 | orchestrator | skipping: [testbed-manager]
2025-05-25 03:24:38.635898 | orchestrator |
2025-05-25 03:24:38.635912 | orchestrator | TASK [osism.services.manager : Include ansible config tasks] *******************
2025-05-25 03:24:38.694347 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-ansible.yml for testbed-manager
2025-05-25 03:24:38.694461 | orchestrator |
2025-05-25 03:24:38.694477 | orchestrator | TASK [osism.services.manager : Copy private ssh keys] **************************
2025-05-25 03:24:40.074810 | orchestrator | changed: [testbed-manager] => (item=None)
2025-05-25 03:24:40.074919 | orchestrator | changed: [testbed-manager] => (item=None)
2025-05-25 03:24:40.074934 | orchestrator | changed: [testbed-manager]
2025-05-25 03:24:40.074947 | orchestrator |
2025-05-25 03:24:40.074959 | orchestrator | TASK [osism.services.manager : Copy ansible environment file] ******************
2025-05-25 03:24:40.768503 | orchestrator | changed: [testbed-manager]
2025-05-25 03:24:40.768609 | orchestrator |
2025-05-25 03:24:40.768734 | orchestrator | TASK [osism.services.manager : Include netbox config tasks] ********************
2025-05-25 03:24:40.867273 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-netbox.yml for testbed-manager
2025-05-25 03:24:40.867369 | orchestrator |
2025-05-25 03:24:40.867384 | orchestrator | TASK [osism.services.manager : Copy secret files] ******************************
2025-05-25 03:24:42.071460 | orchestrator | changed: [testbed-manager] => (item=None)
2025-05-25 03:24:42.071568 | orchestrator | changed: [testbed-manager] => (item=None)
2025-05-25 03:24:42.071583 | orchestrator | changed: [testbed-manager]
2025-05-25 03:24:42.071597 | orchestrator |
2025-05-25 03:24:42.071610 | orchestrator | TASK [osism.services.manager : Copy netbox environment file] *******************
2025-05-25 03:24:42.725573 | orchestrator | changed: [testbed-manager]
2025-05-25 03:24:42.725746 | orchestrator |
2025-05-25 03:24:42.725773 | orchestrator | TASK [osism.services.manager : Copy inventory-reconciler environment file] *****
2025-05-25 03:24:43.380064 | orchestrator | changed: [testbed-manager]
2025-05-25 03:24:43.380168 | orchestrator |
2025-05-25 03:24:43.380184 | orchestrator | TASK [osism.services.manager : Include celery config tasks] ********************
2025-05-25 03:24:43.526275 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-celery.yml for testbed-manager
2025-05-25 03:24:43.526374 | orchestrator |
2025-05-25 03:24:43.526388 | orchestrator | TASK [osism.services.manager : Set fs.inotify.max_user_watches] ****************
2025-05-25 03:24:44.076691 | orchestrator | changed: [testbed-manager]
2025-05-25 03:24:44.076790 | orchestrator |
2025-05-25 03:24:44.076805 | orchestrator | TASK [osism.services.manager : Set fs.inotify.max_user_instances] **************
2025-05-25 03:24:44.487449 | orchestrator | changed: [testbed-manager]
2025-05-25 03:24:44.487575 | orchestrator |
2025-05-25 03:24:44.487591 | orchestrator | TASK [osism.services.manager : Copy celery environment files] ******************
2025-05-25 03:24:45.755063 | orchestrator | changed: [testbed-manager] => (item=conductor)
2025-05-25 03:24:45.755170 | orchestrator | changed: [testbed-manager] => (item=openstack)
2025-05-25 03:24:45.755186 | orchestrator |
2025-05-25 03:24:45.755199 | orchestrator | TASK [osism.services.manager : Copy listener environment file] *****************
2025-05-25 03:24:46.391024 | orchestrator | changed: [testbed-manager]
2025-05-25 03:24:46.391132 | orchestrator |
2025-05-25 03:24:46.391148 | orchestrator | TASK [osism.services.manager : Check for conductor.yml] ************************
2025-05-25 03:24:46.772248 | orchestrator | ok: [testbed-manager]
2025-05-25 03:24:46.772345 | orchestrator |
2025-05-25 03:24:46.772361 | orchestrator | TASK [osism.services.manager : Copy conductor configuration file] **************
2025-05-25 03:24:47.142264 | orchestrator | changed: [testbed-manager]
2025-05-25 03:24:47.142382 | orchestrator |
2025-05-25 03:24:47.142398 | orchestrator | TASK [osism.services.manager : Copy empty conductor configuration file] ********
2025-05-25 03:24:47.192244 | orchestrator | skipping: [testbed-manager]
2025-05-25 03:24:47.192329 | orchestrator |
2025-05-25 03:24:47.192338 | orchestrator | TASK [osism.services.manager : Include wrapper config tasks] *******************
2025-05-25 03:24:47.272174 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-wrapper.yml for testbed-manager
2025-05-25 03:24:47.272252 | orchestrator |
2025-05-25 03:24:47.272264 | orchestrator | TASK [osism.services.manager : Include wrapper vars file] **********************
2025-05-25 03:24:47.321309 | orchestrator | ok: [testbed-manager]
2025-05-25 03:24:47.321389 | orchestrator |
2025-05-25 03:24:47.321404 | orchestrator | TASK [osism.services.manager : Copy wrapper scripts] ***************************
2025-05-25 03:24:49.356997 | orchestrator | changed: [testbed-manager] => (item=osism)
2025-05-25 03:24:49.357106 | orchestrator | changed: [testbed-manager] => (item=osism-update-docker)
2025-05-25 03:24:49.357151 | orchestrator | changed: [testbed-manager] => (item=osism-update-manager)
2025-05-25 03:24:49.357164 | orchestrator |
2025-05-25 03:24:49.357176 | orchestrator | TASK [osism.services.manager : Copy cilium wrapper script] *********************
2025-05-25 03:24:50.077933 | orchestrator | changed: [testbed-manager]
2025-05-25 03:24:50.078087 | orchestrator |
2025-05-25 03:24:50.078104 | orchestrator | TASK [osism.services.manager : Copy hubble wrapper script] *********************
2025-05-25 03:24:50.811473 | orchestrator | changed: [testbed-manager]
2025-05-25 03:24:50.811577 | orchestrator |
2025-05-25 03:24:50.811593 | orchestrator | TASK [osism.services.manager : Copy flux wrapper script] ***********************
2025-05-25 03:24:51.538341 | orchestrator | changed: [testbed-manager]
2025-05-25 03:24:51.538437 | orchestrator |
2025-05-25 03:24:51.538450 | orchestrator | TASK [osism.services.manager : Include scripts config tasks] *******************
2025-05-25 03:24:51.626895 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-scripts.yml for testbed-manager
2025-05-25 03:24:51.626995 | orchestrator |
2025-05-25 03:24:51.627009 | orchestrator | TASK [osism.services.manager : Include scripts vars file] **********************
2025-05-25 03:24:51.675829 | orchestrator | ok: [testbed-manager]
2025-05-25 03:24:51.675906 | orchestrator |
2025-05-25 03:24:51.675920 | orchestrator | TASK [osism.services.manager : Copy scripts] ***********************************
2025-05-25 03:24:52.380748 | orchestrator | changed: [testbed-manager] => (item=osism-include)
2025-05-25 03:24:52.380856 | orchestrator |
2025-05-25 03:24:52.380872 | orchestrator | TASK [osism.services.manager : Include service tasks] **************************
2025-05-25 03:24:52.475500 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/service.yml for testbed-manager
2025-05-25 03:24:52.475595 | orchestrator |
2025-05-25 03:24:52.475610 | orchestrator | TASK [osism.services.manager : Copy manager systemd unit file] *****************
2025-05-25 03:24:53.194315 | orchestrator | changed: [testbed-manager]
2025-05-25 03:24:53.194417 | orchestrator |
2025-05-25 03:24:53.194431 | orchestrator | TASK [osism.services.manager : Create traefik external network] ****************
2025-05-25 03:24:53.821052 | orchestrator | ok: [testbed-manager]
2025-05-25 03:24:53.821157 | orchestrator |
2025-05-25 03:24:53.821172 | orchestrator | TASK [osism.services.manager : Set mariadb healthcheck for mariadb < 11.0.0] ***
2025-05-25 03:24:53.881813 | orchestrator | skipping: [testbed-manager]
2025-05-25 03:24:53.881930 | orchestrator |
2025-05-25 03:24:53.881955 | orchestrator | TASK [osism.services.manager : Set mariadb healthcheck for mariadb >= 11.0.0] ***
2025-05-25 03:24:53.943057 | orchestrator | ok: [testbed-manager]
2025-05-25 03:24:53.943173 | orchestrator |
2025-05-25 03:24:53.943197 | orchestrator | TASK [osism.services.manager : Copy docker-compose.yml file] *******************
2025-05-25 03:24:54.803159 | orchestrator | changed: [testbed-manager]
2025-05-25 03:24:54.803262 | orchestrator |
2025-05-25 03:24:54.803278 | orchestrator | TASK [osism.services.manager : Pull container images] **************************
2025-05-25 03:25:41.025563 | orchestrator | changed: [testbed-manager]
2025-05-25 03:25:41.025726 | orchestrator |
2025-05-25 03:25:41.025746 | orchestrator | TASK [osism.services.manager : Stop and disable old service docker-compose@manager] ***
2025-05-25 03:25:41.682290 | orchestrator | ok: [testbed-manager]
2025-05-25 03:25:41.682388 | orchestrator |
2025-05-25 03:25:41.682404 | orchestrator | TASK [osism.services.manager : Manage manager service] *************************
2025-05-25 03:25:44.484054 | orchestrator | changed: [testbed-manager]
2025-05-25 03:25:44.484159 | orchestrator |
2025-05-25 03:25:44.484175 | orchestrator | TASK [osism.services.manager : Register that manager service was started] ******
2025-05-25 03:25:44.541876 | orchestrator | ok: [testbed-manager]
2025-05-25 03:25:44.541966 | orchestrator |
2025-05-25 03:25:44.541981 | orchestrator | TASK [osism.services.manager : Flush handlers] *********************************
2025-05-25 03:25:44.541993 | orchestrator |
2025-05-25 03:25:44.542005 | orchestrator | RUNNING HANDLER [osism.services.manager : Restart manager service] *************
2025-05-25 03:25:44.606699 | orchestrator | skipping: [testbed-manager]
2025-05-25 03:25:44.606801 | orchestrator |
2025-05-25 03:25:44.606816 | orchestrator | RUNNING HANDLER [osism.services.manager : Wait for manager service to start] ***
2025-05-25 03:26:44.669151 | orchestrator | Pausing for 60 seconds
2025-05-25 03:26:44.669271 | orchestrator | changed: [testbed-manager]
2025-05-25 03:26:44.669286 | orchestrator |
2025-05-25 03:26:44.669325 | orchestrator | RUNNING HANDLER [osism.services.manager : Ensure that all containers are up] ***
2025-05-25 03:26:48.622702 | orchestrator | changed: [testbed-manager]
2025-05-25 03:26:48.622885 | orchestrator |
2025-05-25 03:26:48.622916 | orchestrator | RUNNING HANDLER [osism.services.manager : Wait for an healthy manager service] ***
2025-05-25 03:27:30.259993 | orchestrator | FAILED - RETRYING: [testbed-manager]: Wait for an healthy manager service (50 retries left).
2025-05-25 03:27:30.260113 | orchestrator | FAILED - RETRYING: [testbed-manager]: Wait for an healthy manager service (49 retries left).
2025-05-25 03:27:30.260131 | orchestrator | changed: [testbed-manager]
2025-05-25 03:27:30.260145 | orchestrator |
2025-05-25 03:27:30.260157 | orchestrator | RUNNING HANDLER [osism.services.manager : Copy osismclient bash completion script] ***
2025-05-25 03:27:39.007214 | orchestrator | changed: [testbed-manager]
2025-05-25 03:27:39.007330 | orchestrator |
2025-05-25 03:27:39.007348 | orchestrator | TASK [osism.services.manager : Include initialize tasks] ***********************
2025-05-25 03:27:39.104605 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/initialize.yml for testbed-manager
2025-05-25 03:27:39.104696 | orchestrator |
2025-05-25 03:27:39.104710 | orchestrator | TASK [osism.services.manager : Flush handlers] *********************************
2025-05-25 03:27:39.104723 | orchestrator |
2025-05-25 03:27:39.104734 | orchestrator | TASK [osism.services.manager : Include vault initialize tasks] *****************
2025-05-25 03:27:39.159462 | orchestrator | skipping: [testbed-manager]
2025-05-25 03:27:39.159523 | orchestrator |
2025-05-25 03:27:39.159536 | orchestrator | PLAY RECAP *********************************************************************
2025-05-25 03:27:39.159548 | orchestrator | testbed-manager : ok=110 changed=58 unreachable=0 failed=0 skipped=18 rescued=0 ignored=0
2025-05-25 03:27:39.159560 | orchestrator |
2025-05-25 03:27:39.277011 | orchestrator | + [[ -e /opt/venv/bin/activate ]]
2025-05-25 03:27:39.277081 | orchestrator | + deactivate
2025-05-25 03:27:39.277094 | orchestrator | + '[' -n /usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin ']'
2025-05-25 03:27:39.277108 | orchestrator | + PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin
2025-05-25 03:27:39.277119 | orchestrator | + export PATH
2025-05-25 03:27:39.277131 | orchestrator | + unset _OLD_VIRTUAL_PATH
2025-05-25 03:27:39.277142 | orchestrator | + '[' -n '' ']'
2025-05-25 03:27:39.277153 | orchestrator | + hash -r
2025-05-25 03:27:39.277164 | orchestrator | + '[' -n '' ']'
2025-05-25 03:27:39.277175 | orchestrator | + unset VIRTUAL_ENV
2025-05-25 03:27:39.277185 | orchestrator | + unset VIRTUAL_ENV_PROMPT
2025-05-25 03:27:39.277197 | orchestrator | + '[' '!' '' = nondestructive ']'
2025-05-25 03:27:39.277209 | orchestrator | + unset -f deactivate
2025-05-25 03:27:39.277221 | orchestrator | + cp /home/dragon/.ssh/id_rsa.pub /opt/ansible/secrets/id_rsa.operator.pub
2025-05-25 03:27:39.284791 | orchestrator | + [[ ceph-ansible == \c\e\p\h\-\a\n\s\i\b\l\e ]]
2025-05-25 03:27:39.284846 | orchestrator | + wait_for_container_healthy 60 ceph-ansible
2025-05-25 03:27:39.284861 | orchestrator | + local max_attempts=60
2025-05-25 03:27:39.284874 | orchestrator | + local name=ceph-ansible
2025-05-25 03:27:39.284886 | orchestrator | + local attempt_num=1
2025-05-25 03:27:39.286003 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible
2025-05-25 03:27:39.322342 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]]
2025-05-25 03:27:39.322407 | orchestrator | + wait_for_container_healthy 60 kolla-ansible
2025-05-25 03:27:39.322422 | orchestrator | + local max_attempts=60
2025-05-25 03:27:39.322435 | orchestrator | + local name=kolla-ansible
2025-05-25 03:27:39.322447 | orchestrator | + local attempt_num=1
2025-05-25 03:27:39.323217 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' kolla-ansible
2025-05-25 03:27:39.356874 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]]
2025-05-25 03:27:39.356927 | orchestrator | + wait_for_container_healthy 60 osism-ansible
2025-05-25 03:27:39.356939 | orchestrator | + local max_attempts=60
2025-05-25 03:27:39.356950 | orchestrator | + local name=osism-ansible
2025-05-25 03:27:39.356961 | orchestrator | + local attempt_num=1
2025-05-25 03:27:39.357663 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' osism-ansible
2025-05-25 03:27:39.391958 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]]
2025-05-25 03:27:39.392016 | orchestrator | + [[ true == \t\r\u\e ]]
2025-05-25 03:27:39.392031 | orchestrator | + sh -c /opt/configuration/scripts/disable-ara.sh
2025-05-25 03:27:40.101227 | orchestrator | + docker compose --project-directory /opt/manager ps
2025-05-25 03:27:40.287019 | orchestrator | NAME IMAGE COMMAND SERVICE CREATED STATUS PORTS
2025-05-25 03:27:40.287118 | orchestrator | ceph-ansible registry.osism.tech/osism/ceph-ansible:reef "/entrypoint.sh osis…" ceph-ansible About a minute ago Up About a minute (healthy)
2025-05-25 03:27:40.287133 | orchestrator | kolla-ansible registry.osism.tech/osism/kolla-ansible:2024.2 "/entrypoint.sh osis…" kolla-ansible About a minute ago Up About a minute (healthy)
2025-05-25 03:27:40.287145 | orchestrator | manager-api-1 registry.osism.tech/osism/osism:latest "/sbin/tini -- osism…" api About a minute ago Up About a minute (healthy) 192.168.16.5:8000->8000/tcp
2025-05-25 03:27:40.287159 | orchestrator | manager-ara-server-1 registry.osism.tech/osism/ara-server:1.7.2 "sh -c '/wait && /ru…" ara-server About a minute ago Up About a minute (healthy) 8000/tcp
2025-05-25 03:27:40.287170 | orchestrator | manager-beat-1 registry.osism.tech/osism/osism:latest "/sbin/tini -- osism…" beat About a minute ago Up About a minute (healthy)
2025-05-25 03:27:40.287181 | orchestrator | manager-conductor-1 registry.osism.tech/osism/osism:latest "/sbin/tini -- osism…" conductor About a minute ago Up About a minute (healthy)
2025-05-25 03:27:40.287191 | orchestrator | manager-flower-1 registry.osism.tech/osism/osism:latest "/sbin/tini -- osism…" flower About a minute ago Up About a minute (healthy)
2025-05-25 03:27:40.287202 | orchestrator | manager-inventory_reconciler-1 registry.osism.tech/osism/inventory-reconciler:latest "/sbin/tini -- /entr…" inventory_reconciler About a minute ago Up 51 seconds (healthy)
2025-05-25 03:27:40.287213 | orchestrator | manager-listener-1 registry.osism.tech/osism/osism:latest "/sbin/tini -- osism…" listener About a minute ago Up About a minute (healthy)
2025-05-25 03:27:40.287224 | orchestrator | manager-mariadb-1 registry.osism.tech/dockerhub/library/mariadb:11.7.2 "docker-entrypoint.s…" mariadb About a minute ago Up About a minute (healthy) 3306/tcp
2025-05-25 03:27:40.287235 | orchestrator | manager-netbox-1 registry.osism.tech/osism/osism:latest "/sbin/tini -- osism…" netbox About a minute ago Up About a minute (healthy)
2025-05-25 03:27:40.287246 | orchestrator | manager-openstack-1 registry.osism.tech/osism/osism:latest "/sbin/tini -- osism…" openstack About a minute ago Up About a minute (healthy)
2025-05-25 03:27:40.287256 | orchestrator | manager-redis-1 registry.osism.tech/dockerhub/library/redis:7.4.3-alpine "docker-entrypoint.s…" redis About a minute ago Up About a minute (healthy) 6379/tcp
2025-05-25 03:27:40.287267 | orchestrator | manager-watchdog-1 registry.osism.tech/osism/osism:latest "/sbin/tini -- osism…" watchdog About a minute ago Up About a minute (healthy)
2025-05-25 03:27:40.287278 | orchestrator | osism-ansible registry.osism.tech/osism/osism-ansible:latest "/entrypoint.sh osis…" osism-ansible About a minute ago Up About a minute (healthy)
2025-05-25 03:27:40.287289 | orchestrator | osism-kubernetes registry.osism.tech/osism/osism-kubernetes:latest "/entrypoint.sh osis…" osism-kubernetes About a minute ago Up About a minute (healthy)
2025-05-25 03:27:40.287299 | orchestrator | osismclient registry.osism.tech/osism/osism:latest "/sbin/tini -- sleep…" osismclient About a minute ago Up About a minute (healthy)
2025-05-25 03:27:40.295056 | orchestrator | + docker compose --project-directory /opt/netbox ps
2025-05-25 03:27:40.444922 | orchestrator | NAME IMAGE COMMAND SERVICE CREATED STATUS PORTS
2025-05-25 03:27:40.445017 | orchestrator | netbox-netbox-1 registry.osism.tech/osism/netbox:v4.2.2 "/usr/bin/tini -- /o…" netbox 8 minutes ago Up 7 minutes (healthy)
2025-05-25 03:27:40.445054 | orchestrator | netbox-netbox-worker-1 registry.osism.tech/osism/netbox:v4.2.2 "/opt/netbox/venv/bi…" netbox-worker 8 minutes ago Up 3 minutes (healthy)
2025-05-25 03:27:40.445068 | orchestrator | netbox-postgres-1 registry.osism.tech/dockerhub/library/postgres:16.9-alpine "docker-entrypoint.s…" postgres 8 minutes ago Up 7 minutes (healthy) 5432/tcp
2025-05-25 03:27:40.445082 | orchestrator | netbox-redis-1 registry.osism.tech/dockerhub/library/redis:7.4.3-alpine "docker-entrypoint.s…" redis 8 minutes ago Up 7 minutes (healthy) 6379/tcp
2025-05-25 03:27:40.456449 | orchestrator | ++ semver latest 7.0.0
2025-05-25 03:27:40.518337 | orchestrator | + [[ -1 -ge 0 ]]
2025-05-25 03:27:40.518413 | orchestrator | + [[ latest == \l\a\t\e\s\t ]]
2025-05-25 03:27:40.518428 | orchestrator | + sed -i s/community.general.yaml/osism.commons.still_alive/ /opt/configuration/environments/ansible.cfg
2025-05-25 03:27:40.523206 | orchestrator | + osism apply resolvconf -l testbed-manager
2025-05-25 03:27:42.426094 | orchestrator | 2025-05-25 03:27:42 | INFO  | Task f0c4520e-40c9-4f02-bd3c-c9a1a049e64a (resolvconf) was prepared for execution.
2025-05-25 03:27:42.426199 | orchestrator | 2025-05-25 03:27:42 | INFO  | It takes a moment until task f0c4520e-40c9-4f02-bd3c-c9a1a049e64a (resolvconf) has been started and output is visible here.
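The `set -x` trace above (`local max_attempts=60`, `local name=…`, `local attempt_num=1`, then a `docker inspect -f '{{.State.Health.Status}}'` probe) comes from a `wait_for_container_healthy` helper. A hedged reconstruction from that trace alone: the sleep-and-retry branch is assumed, since the trace only ever shows the already-healthy path, and `docker` is resolved from `$PATH` here (the trace uses `/usr/bin/docker`) so the probe can be stubbed.

```shell
#!/bin/sh
# Reconstruction (an assumption, based only on the trace above) of
# wait_for_container_healthy: poll a container's health status until it
# reports "healthy" or the attempt budget runs out.
wait_for_container_healthy() {
    local max_attempts="$1"
    local name="$2"
    local attempt_num=1
    while [ "$(docker inspect -f '{{.State.Health.Status}}' "$name")" != healthy ]; do
        if [ "$attempt_num" -ge "$max_attempts" ]; then
            echo "$name did not become healthy after $max_attempts attempts" >&2
            return 1
        fi
        attempt_num=$((attempt_num + 1))
        sleep 5
    done
}
```

In the job it is invoked as `wait_for_container_healthy 60 ceph-ansible` (and again for kolla-ansible and osism-ansible), returning immediately because the first probe already reports `healthy`.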
2025-05-25 03:27:46.302312 | orchestrator |
2025-05-25 03:27:46.302435 | orchestrator | PLAY [Apply role resolvconf] ***************************************************
2025-05-25 03:27:46.302453 | orchestrator |
2025-05-25 03:27:46.302466 | orchestrator | TASK [Gathering Facts] *********************************************************
2025-05-25 03:27:46.304855 | orchestrator | Sunday 25 May 2025 03:27:46 +0000 (0:00:00.153) 0:00:00.153 ************
2025-05-25 03:27:50.292677 | orchestrator | ok: [testbed-manager]
2025-05-25 03:27:50.292852 | orchestrator |
2025-05-25 03:27:50.293918 | orchestrator | TASK [osism.commons.resolvconf : Check minimum and maximum number of name servers] ***
2025-05-25 03:27:50.294902 | orchestrator | Sunday 25 May 2025 03:27:50 +0000 (0:00:03.996) 0:00:04.149 ************
2025-05-25 03:27:50.359120 | orchestrator | skipping: [testbed-manager]
2025-05-25 03:27:50.359562 | orchestrator |
2025-05-25 03:27:50.362165 | orchestrator | TASK [osism.commons.resolvconf : Include resolvconf tasks] *********************
2025-05-25 03:27:50.362865 | orchestrator | Sunday 25 May 2025 03:27:50 +0000 (0:00:00.067) 0:00:04.217 ************
2025-05-25 03:27:50.443133 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/configure-resolv.yml for testbed-manager
2025-05-25 03:27:50.443219 | orchestrator |
2025-05-25 03:27:50.443233 | orchestrator | TASK [osism.commons.resolvconf : Include distribution specific installation tasks] ***
2025-05-25 03:27:50.443246 | orchestrator | Sunday 25 May 2025 03:27:50 +0000 (0:00:00.080) 0:00:04.297 ************
2025-05-25 03:27:50.517103 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/install-Debian-family.yml for testbed-manager
2025-05-25 03:27:50.518675 | orchestrator |
2025-05-25 03:27:50.519371 | orchestrator | TASK [osism.commons.resolvconf : Remove packages configuring /etc/resolv.conf] ***
2025-05-25 03:27:50.519866 | orchestrator | Sunday 25 May 2025 03:27:50 +0000 (0:00:00.077) 0:00:04.374 ************
2025-05-25 03:27:51.591231 | orchestrator | ok: [testbed-manager]
2025-05-25 03:27:51.591959 | orchestrator |
2025-05-25 03:27:51.593007 | orchestrator | TASK [osism.commons.resolvconf : Install package systemd-resolved] *************
2025-05-25 03:27:51.594390 | orchestrator | Sunday 25 May 2025 03:27:51 +0000 (0:00:01.073) 0:00:05.447 ************
2025-05-25 03:27:51.654370 | orchestrator | skipping: [testbed-manager]
2025-05-25 03:27:51.655050 | orchestrator |
2025-05-25 03:27:51.655416 | orchestrator | TASK [osism.commons.resolvconf : Retrieve file status of /etc/resolv.conf] *****
2025-05-25 03:27:51.656412 | orchestrator | Sunday 25 May 2025 03:27:51 +0000 (0:00:00.064) 0:00:05.512 ************
2025-05-25 03:27:52.125553 | orchestrator | ok: [testbed-manager]
2025-05-25 03:27:52.125714 | orchestrator |
2025-05-25 03:27:52.127053 | orchestrator | TASK [osism.commons.resolvconf : Archive existing file /etc/resolv.conf] *******
2025-05-25 03:27:52.128085 | orchestrator | Sunday 25 May 2025 03:27:52 +0000 (0:00:00.080) 0:00:05.983 ************
2025-05-25 03:27:52.207050 | orchestrator | skipping: [testbed-manager]
2025-05-25 03:27:52.207200 | orchestrator |
2025-05-25 03:27:52.208660 | orchestrator | TASK [osism.commons.resolvconf : Link /run/systemd/resolve/stub-resolv.conf to /etc/resolv.conf] ***
2025-05-25 03:27:52.209602 | orchestrator | Sunday 25 May 2025 03:27:52 +0000 (0:00:00.080) 0:00:06.063 ************
2025-05-25 03:27:52.783888 | orchestrator | changed: [testbed-manager]
2025-05-25 03:27:52.784277 | orchestrator |
2025-05-25 03:27:52.785392 | orchestrator | TASK [osism.commons.resolvconf : Copy configuration files] *********************
2025-05-25 03:27:52.786328 | orchestrator | Sunday 25 May 2025 03:27:52 +0000 (0:00:00.575) 0:00:06.639 ************
2025-05-25 03:27:53.899699 | orchestrator | changed: [testbed-manager]
2025-05-25 03:27:53.900739 | orchestrator |
2025-05-25 03:27:53.901160 | orchestrator | TASK [osism.commons.resolvconf : Start/enable systemd-resolved service] ********
2025-05-25 03:27:53.902263 | orchestrator | Sunday 25 May 2025 03:27:53 +0000 (0:00:01.115) 0:00:07.755 ************
2025-05-25 03:27:54.881083 | orchestrator | ok: [testbed-manager]
2025-05-25 03:27:54.881190 | orchestrator |
2025-05-25 03:27:54.881206 | orchestrator | TASK [osism.commons.resolvconf : Include distribution specific configuration tasks] ***
2025-05-25 03:27:54.881219 | orchestrator | Sunday 25 May 2025 03:27:54 +0000 (0:00:00.981) 0:00:08.736 ************
2025-05-25 03:27:54.978520 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/configure-Debian-family.yml for testbed-manager
2025-05-25 03:27:54.978833 | orchestrator |
2025-05-25 03:27:54.979650 | orchestrator | TASK [osism.commons.resolvconf : Restart systemd-resolved service] *************
2025-05-25 03:27:54.980528 | orchestrator | Sunday 25 May 2025 03:27:54 +0000 (0:00:00.099) 0:00:08.836 ************
2025-05-25 03:27:56.120513 | orchestrator | changed: [testbed-manager]
2025-05-25 03:27:56.120623 | orchestrator |
2025-05-25 03:27:56.120639 | orchestrator | PLAY RECAP *********************************************************************
2025-05-25 03:27:56.120653 | orchestrator | 2025-05-25 03:27:56 | INFO  | Play has been completed. There may now be a delay until all logs have been written.
2025-05-25 03:27:56.120666 | orchestrator | 2025-05-25 03:27:56 | INFO  | Please wait and do not abort execution.
2025-05-25 03:27:56.121462 | orchestrator | testbed-manager : ok=10  changed=3  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0
2025-05-25 03:27:56.122231 | orchestrator |
2025-05-25 03:27:56.123090 | orchestrator |
2025-05-25 03:27:56.123562 | orchestrator | TASKS RECAP ********************************************************************
2025-05-25 03:27:56.124508 | orchestrator | Sunday 25 May 2025 03:27:56 +0000 (0:00:01.140) 0:00:09.976 ************
2025-05-25 03:27:56.125704 | orchestrator | ===============================================================================
2025-05-25 03:27:56.127357 | orchestrator | Gathering Facts --------------------------------------------------------- 4.00s
2025-05-25 03:27:56.127457 | orchestrator | osism.commons.resolvconf : Restart systemd-resolved service ------------- 1.14s
2025-05-25 03:27:56.128502 | orchestrator | osism.commons.resolvconf : Copy configuration files --------------------- 1.12s
2025-05-25 03:27:56.128728 | orchestrator | osism.commons.resolvconf : Remove packages configuring /etc/resolv.conf --- 1.07s
2025-05-25 03:27:56.129392 | orchestrator | osism.commons.resolvconf : Start/enable systemd-resolved service -------- 0.98s
2025-05-25 03:27:56.130119 | orchestrator | osism.commons.resolvconf : Link /run/systemd/resolve/stub-resolv.conf to /etc/resolv.conf --- 0.58s
2025-05-25 03:27:56.131046 | orchestrator | osism.commons.resolvconf : Retrieve file status of /etc/resolv.conf ----- 0.47s
2025-05-25 03:27:56.131369 | orchestrator | osism.commons.resolvconf : Include distribution specific configuration tasks --- 0.10s
2025-05-25 03:27:56.132056 | orchestrator | osism.commons.resolvconf : Archive existing file /etc/resolv.conf ------- 0.08s
2025-05-25 03:27:56.132789 | orchestrator | osism.commons.resolvconf : Include resolvconf tasks --------------------- 0.08s
2025-05-25 03:27:56.133061 | orchestrator | osism.commons.resolvconf : Include distribution specific installation tasks --- 0.08s
2025-05-25 03:27:56.133683 | orchestrator | osism.commons.resolvconf : Check minimum and maximum number of name servers --- 0.07s
2025-05-25 03:27:56.134000 | orchestrator | osism.commons.resolvconf : Install package systemd-resolved ------------- 0.06s
2025-05-25 03:27:56.553824 | orchestrator | + osism apply sshconfig
2025-05-25 03:27:58.262880 | orchestrator | 2025-05-25 03:27:58 | INFO  | Task 2860db3c-5725-4523-90ee-23f10a591212 (sshconfig) was prepared for execution.
2025-05-25 03:27:58.262989 | orchestrator | 2025-05-25 03:27:58 | INFO  | It takes a moment until task 2860db3c-5725-4523-90ee-23f10a591212 (sshconfig) has been started and output is visible here.
2025-05-25 03:28:02.081480 | orchestrator |
2025-05-25 03:28:02.081595 | orchestrator | PLAY [Apply role sshconfig] ****************************************************
2025-05-25 03:28:02.082082 | orchestrator |
2025-05-25 03:28:02.082590 | orchestrator | TASK [osism.commons.sshconfig : Get home directory of operator user] ***********
2025-05-25 03:28:02.084186 | orchestrator | Sunday 25 May 2025 03:28:02 +0000 (0:00:00.127) 0:00:00.127 ************
2025-05-25 03:28:02.596119 | orchestrator | ok: [testbed-manager]
2025-05-25 03:28:02.596227 | orchestrator |
2025-05-25 03:28:02.596789 | orchestrator | TASK [osism.commons.sshconfig : Ensure .ssh/config.d exist] ********************
2025-05-25 03:28:02.597942 | orchestrator | Sunday 25 May 2025 03:28:02 +0000 (0:00:00.517) 0:00:00.645 ************
2025-05-25 03:28:03.050433 | orchestrator | changed: [testbed-manager]
2025-05-25 03:28:03.052388 | orchestrator |
2025-05-25 03:28:03.052424 | orchestrator | TASK [osism.commons.sshconfig : Ensure config for each host exist] *************
2025-05-25 03:28:03.053685 | orchestrator | Sunday 25 May 2025 03:28:03 +0000 (0:00:00.454) 0:00:01.100 ************
2025-05-25 03:28:08.349302 | orchestrator | changed: [testbed-manager] => (item=testbed-manager)
2025-05-25 03:28:08.349831 | orchestrator | changed: [testbed-manager] => (item=testbed-node-0)
2025-05-25 03:28:08.350978 | orchestrator | changed: [testbed-manager] => (item=testbed-node-1)
2025-05-25 03:28:08.352638 | orchestrator | changed: [testbed-manager] => (item=testbed-node-2)
2025-05-25 03:28:08.353754 | orchestrator | changed: [testbed-manager] => (item=testbed-node-3)
2025-05-25 03:28:08.354497 | orchestrator | changed: [testbed-manager] => (item=testbed-node-4)
2025-05-25 03:28:08.355198 | orchestrator | changed: [testbed-manager] => (item=testbed-node-5)
2025-05-25 03:28:08.356087 | orchestrator |
2025-05-25 03:28:08.356882 | orchestrator | TASK [osism.commons.sshconfig : Add extra config] ******************************
2025-05-25 03:28:08.357513 | orchestrator | Sunday 25 May 2025 03:28:08 +0000 (0:00:05.295) 0:00:06.395 ************
2025-05-25 03:28:08.425171 | orchestrator | skipping: [testbed-manager]
2025-05-25 03:28:08.425247 | orchestrator |
2025-05-25 03:28:08.426092 | orchestrator | TASK [osism.commons.sshconfig : Assemble ssh config] ***************************
2025-05-25 03:28:08.426700 | orchestrator | Sunday 25 May 2025 03:28:08 +0000 (0:00:00.076) 0:00:06.471 ************
2025-05-25 03:28:08.998682 | orchestrator | changed: [testbed-manager]
2025-05-25 03:28:08.999250 | orchestrator |
2025-05-25 03:28:09.000241 | orchestrator | PLAY RECAP *********************************************************************
2025-05-25 03:28:09.000947 | orchestrator | 2025-05-25 03:28:08 | INFO  | Play has been completed. There may now be a delay until all logs have been written.
2025-05-25 03:28:09.000971 | orchestrator | 2025-05-25 03:28:08 | INFO  | Please wait and do not abort execution.
2025-05-25 03:28:09.002466 | orchestrator | testbed-manager : ok=4  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2025-05-25 03:28:09.003029 | orchestrator |
2025-05-25 03:28:09.004094 | orchestrator |
2025-05-25 03:28:09.004821 | orchestrator | TASKS RECAP ********************************************************************
2025-05-25 03:28:09.005600 | orchestrator | Sunday 25 May 2025 03:28:08 +0000 (0:00:00.576) 0:00:07.048 ************
2025-05-25 03:28:09.006316 | orchestrator | ===============================================================================
2025-05-25 03:28:09.007458 | orchestrator | osism.commons.sshconfig : Ensure config for each host exist ------------- 5.30s
2025-05-25 03:28:09.008853 | orchestrator | osism.commons.sshconfig : Assemble ssh config --------------------------- 0.58s
2025-05-25 03:28:09.009688 | orchestrator | osism.commons.sshconfig : Get home directory of operator user ----------- 0.52s
2025-05-25 03:28:09.010379 | orchestrator | osism.commons.sshconfig : Ensure .ssh/config.d exist -------------------- 0.45s
2025-05-25 03:28:09.011544 | orchestrator | osism.commons.sshconfig : Add extra config ------------------------------ 0.08s
2025-05-25 03:28:09.424262 | orchestrator | + osism apply known-hosts
2025-05-25 03:28:11.114673 | orchestrator | 2025-05-25 03:28:11 | INFO  | Task ee6f75ae-2b9a-408a-ba81-4ac388dd6fd1 (known-hosts) was prepared for execution.
2025-05-25 03:28:11.114834 | orchestrator | 2025-05-25 03:28:11 | INFO  | It takes a moment until task ee6f75ae-2b9a-408a-ba81-4ac388dd6fd1 (known-hosts) has been started and output is visible here.
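The sshconfig play above drops one fragment per inventory host into `.ssh/config.d` and then assembles them into a single file (the "Ensure config for each host exist" and "Assemble ssh config" tasks). A rough equivalent under a temporary directory; the fragment contents and option shown are assumptions, not the role's actual templates:

```shell
# One fragment per host, then concatenation into a single ssh config,
# mirroring the per-host loop and the assemble step in the play above.
workdir="$(mktemp -d)"
mkdir -p "$workdir/config.d"
for host in testbed-manager testbed-node-0 testbed-node-1; do
    printf 'Host %s\n    StrictHostKeyChecking yes\n\n' "$host" > "$workdir/config.d/$host"
done
cat "$workdir/config.d"/* > "$workdir/config"
grep -c '^Host ' "$workdir/config"   # one stanza per host
rm -rf "$workdir"
```

Keeping fragments separate and assembling them afterwards is what lets the role rerun idempotently: each host's stanza is regenerated in isolation before the final file is rebuilt.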
2025-05-25 03:28:14.992659 | orchestrator |
2025-05-25 03:28:14.994336 | orchestrator | PLAY [Apply role known_hosts] **************************************************
2025-05-25 03:28:14.995763 | orchestrator |
2025-05-25 03:28:14.996704 | orchestrator | TASK [osism.commons.known_hosts : Run ssh-keyscan for all hosts with hostname] ***
2025-05-25 03:28:14.997452 | orchestrator | Sunday 25 May 2025 03:28:14 +0000 (0:00:00.165) 0:00:00.165 ************
2025-05-25 03:28:20.991606 | orchestrator | ok: [testbed-manager] => (item=testbed-manager)
2025-05-25 03:28:20.991744 | orchestrator | ok: [testbed-manager] => (item=testbed-node-3)
2025-05-25 03:28:20.994621 | orchestrator | ok: [testbed-manager] => (item=testbed-node-4)
2025-05-25 03:28:20.996074 | orchestrator | ok: [testbed-manager] => (item=testbed-node-5)
2025-05-25 03:28:20.996994 | orchestrator | ok: [testbed-manager] => (item=testbed-node-0)
2025-05-25 03:28:20.998123 | orchestrator | ok: [testbed-manager] => (item=testbed-node-1)
2025-05-25 03:28:20.998739 | orchestrator | ok: [testbed-manager] => (item=testbed-node-2)
2025-05-25 03:28:20.999373 | orchestrator |
2025-05-25 03:28:21.000107 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries for all hosts with hostname] ***
2025-05-25 03:28:21.000670 | orchestrator | Sunday 25 May 2025 03:28:20 +0000 (0:00:06.000) 0:00:06.165 ************
2025-05-25 03:28:21.154323 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-manager)
2025-05-25 03:28:21.155429 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-3)
2025-05-25 03:28:21.156985 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-4)
2025-05-25 03:28:21.157532 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-5)
2025-05-25 03:28:21.158660 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-0)
2025-05-25 03:28:21.159354 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-1)
2025-05-25 03:28:21.160013 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-2)
2025-05-25 03:28:21.161405 | orchestrator |
2025-05-25 03:28:21.162184 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] ***********
2025-05-25 03:28:21.162947 | orchestrator | Sunday 25 May 2025 03:28:21 +0000 (0:00:00.162) 0:00:06.328 ************
2025-05-25 03:28:22.321475 | orchestrator | changed: [testbed-manager] => (item=testbed-manager ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDo7kYrsr0Huw2c9/76EGPfc98JHr4F7mYBytXPLoITAljwVID/+kgwar2Q+IJqxyfVcR2/q8iy00vzj+nMJ/qWZ+5hKZGDsiagP3WXlZz+X5mFsyk+JsvDgv4UcBq54ocm/EcFSuF52rX/xJ9drbLGUeMgIjpD7ZAVc/uGH5saBD3BVT3lJcnrWHDB6t1foubwZqNRYl53wZVWl+ugYRdikNRUlsRSrnpdJvCtlT5yp7ZeswO2tDOVMKaIptxZjOuy13rhItacFOzDPZmm2MlmTCuI0dIepgMlkh6vpzPVp0SKXIxS+74IRK2JQTzzEfJKdbhXaO+bNzFXrejnK7qJXdpAaTcV8d+NIKygymO2DaB835LqGEoLlKHCFq1kDvBARl7rGdW4T5qiW2Wx4YGgaQQ8aOVwiyZEf8Vzn073p+mczVYJsJwX4X0s8f529/0Sw+9h+8+OV9I0/nLsxz/bIfGXK90vplyjxa1kzNn3+psrhfW74RifhnyS4twnJRc=)
2025-05-25 03:28:22.323661 | orchestrator | changed: [testbed-manager] => (item=testbed-manager ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBBIyDqymZ4zWssUk5AHED0Dq7EHeGJrdCUYMIG2MFVp5bn1o5zSI5tqkAyEWthgR8oAVFuFA1ex7EFRTez89wsM=)
2025-05-25 03:28:22.323698 | orchestrator | changed: [testbed-manager] => (item=testbed-manager ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIKsnj8w0wMR9EYSud726adEUjctT9D+w/90EbjhbP6th)
2025-05-25 03:28:22.324046 | orchestrator |
2025-05-25 03:28:22.325275 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] ***********
2025-05-25 03:28:22.325962 | orchestrator | Sunday 25 May 2025 03:28:22 +0000 (0:00:01.167) 0:00:07.495 ************
2025-05-25 03:28:23.412660 | orchestrator | changed: [testbed-manager] => (item=testbed-node-3 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIPaYfZAwTqsGtkjkUTNjoEIb5xD6yOusSBl5h4BIvg8D)
2025-05-25 03:28:23.413266 | orchestrator | changed: [testbed-manager] => (item=testbed-node-3 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDVNB2HDFPIXvWcEfdBlZ+CqcfjUnJSOpB0keXMdr21wv+6NBgpX9byCHNzAFm+ycG47N57sEcr7DZbXRT44Nc7KQ+nxpmjYCXVMEeBbepGx60/gp2p/8aUcgO+z1CG6ZwiQyDnCNKcRDe5z5AatbCqVZiDvmUrjYBYGiH2OVHKe00mooxkm1UWAJ/lv63ARNWQUUZa61vTJ7C9k9zOZJtCcJLF7zeZltUYpvOgrJ79VfUClF6YDmA/y2Y4RF1eiSPy7yl4p9Zd3G2kSiadjYIxQ2ZUcI5tkOg2ZkfEbmgs0zHw8RN5YWJ5Iy7aEDtDrjSSTAEFuUlreWxNgND2rM2NPay48213Am8Cdx6TjbWcE32o41S2l8rbYil/w9l2FHVOREeokQLSzyaeqx/1k4X2VvXePGqc6kw0k9EE6dZrtNx2bR+kuLAyBlfhz2xo1HxeGk2psMc+fnUo44iIAKx4ZsieLqHNL1r5tkZv6YSu8VDt+PcI282flniC76bqsyM=)
2025-05-25 03:28:23.416037 | orchestrator | changed: [testbed-manager] => (item=testbed-node-3 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBEZ8RtiYkNTn3r7GiI09ZDXkFXmYgSU3gjJdHR/HJZz0tSDtx+fHqPM9JR0ocfy/F+oiWGkdrezRtIGL/X2AZ/I=)
2025-05-25 03:28:23.416695 | orchestrator |
2025-05-25 03:28:23.417393 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] ***********
2025-05-25 03:28:23.418197 | orchestrator | Sunday 25 May 2025 03:28:23 +0000 (0:00:01.091) 0:00:08.587 ************
2025-05-25 03:28:24.473612 | orchestrator | changed: [testbed-manager] => (item=testbed-node-4 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBP5REx7T0gm3arYLRULaMPR0+cK0j0xqyxHHezAfGUhTPxENtbhSkWeAzGwpENFJNlbc4cpSUIMaq8FBetk8dCw=)
2025-05-25 03:28:24.473732 | orchestrator | changed: [testbed-manager] => (item=testbed-node-4 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDAG+3TyPj3UNHQVtyvUF3Lep99sQuAOEQRTwFhNZ8yLjP/VFGgJu8mIpwxpypwGL87OfGkKGD4VQ9J9B+K+v+cAzOH/CbIOSf41B+xcUvqYWyml4b16/AHue+6AVF/1RtGSOKp7M1CtUjoXqGtB88Ev6DzZzgZU1YIlSmU66NG/7L+D3LOvr8dx3MDgc9L0H0OXvP8/0zZSd0uIxqBwgu5LU2Mocb+b44aJsDIU7MDqyRq7ilifLtHoZMkb0eOuyS8AVQxCUqxi1QaR71MKY/iIrtOoYIBM4obaSknRF7EfJQw9DyC4KlV2x1PlMrfKMzfI01PMT4B2Oi1qaK3NbdO1zBsw7aDYj5suuZUxsdDd7d1FD0gzZZl+AXuLZ6igX/Dlq23dyJl+C+T7qIO5BR03QUL2+PfxqWahYRB3fc7s52h/wmKU7Vx+Y00vYFAy6qhvoeT+pVxC/aqRqbzCRiqaY4A/jctXdYqI7djXn+4RhzBLWUjooMNzWpHO/pWB88=)
2025-05-25 03:28:24.474330 | orchestrator | changed: [testbed-manager] => (item=testbed-node-4 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIB7sdL4/NdznBzhaOTZpoPLZ31nbzVn6hDAQkvTjZwTr)
2025-05-25 03:28:24.474961 | orchestrator |
2025-05-25 03:28:24.475270 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] ***********
2025-05-25 03:28:24.475984 | orchestrator | Sunday 25 May 2025 03:28:24 +0000 (0:00:01.059) 0:00:09.646 ************
2025-05-25 03:28:25.496079 | orchestrator | changed: [testbed-manager] => (item=testbed-node-5 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQC+U0wlxEUQbWMJmUOY1SbRqrpLA+gmGRXkkqogYPdlbQ3YkVyDibZpz9GSg9WtJRl/1bQhBE6JWx5wUEh4oxG1dmiD13jgz2KQA6DZx9nKNW0Ehino4oNbfEeUxlkdFglqg8kYl1c0UlN12i3noy8/fTDIT4GTKGhKkZzgHouaA8LLPeHCleTKXgCiVrtyAVBAWG0aqOk5I2TbUR55pdVXqi0QwTpSobKBfk00qrE8DWfrsEecyY2XxEt7JyhXlgGdO9wgX8hqOclluQGofqze+kX/oMI8sbp6WyIlCEULQs92gMPMBw54VNOnH/J/LKMUmyt2YJOQD87uD1s+nUvYyqzVxHLipZibFh8bxW4nVJx9d39Q+dEp2czTzOt9DCqPfqQfwVTUY1a6266Qk2MRx4i5OMMLbrR6NnhLWNPt8Re8adW7sVOn6aSsNpaJptfM5S2DkhdzSUWJ7d1fL9Bf03UsJnUC/NcLChtxrFHDJXtRtTBN7H7mo8u7E800zCs=)
2025-05-25 03:28:25.496153 | orchestrator | changed: [testbed-manager] => (item=testbed-node-5 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIE3uGg5lMRrN+fLNw7Zc2CpFjE71bg1ucb7asbxa3Kcw)
2025-05-25 03:28:25.496764 | orchestrator | changed: [testbed-manager] => (item=testbed-node-5 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBDE900ulEMfKPUIcDV5KlIepa1z/RQCVFC9juZ8K+cihYlnr3fE6Avu4H1FPvhxPhzAk1PJpzJ8q9mhhtvZs+Us=)
2025-05-25 03:28:25.497473 | orchestrator |
2025-05-25 03:28:25.498312 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] ***********
2025-05-25 03:28:25.498668 | orchestrator | Sunday 25 May 2025 03:28:25 +0000 (0:00:01.022) 0:00:10.669 ************
2025-05-25 03:28:26.527357 | orchestrator | changed: [testbed-manager] => (item=testbed-node-0 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCQ2v3BN8q8JEuIWDkk+vZZK0BAoSuQGLKNTFY6tQ0duEIT69Ec8cePYxmmF2LgZ2PJMaIb3C4mNkxP3YA9njQvXmgZe2qCg7WqVfu2YQOs5YBECny7/3utTGmwqgQgocfF1uzkgzF0Lnjjhe9mCxptKYSXbnrh82xVce3HAuWCYV4E39rhZmmciJLRyiLxqqcKX07VjiLtEKcTxbXCkhQNcath1Ujd7myyBK6lYWhKZ6nkSZF2ijJr2hfT3Og+JE11QNzY3WXn5sqm9yjmq8SiuP4v3ZGV0ineRgxVafODwzT9lOpaqBs/FWm36YvIejsQakT7saH4c7Qmf5CvysH2JVW/TxbKC3/pTx/0btc4UstU9e9FARzpcKiv2cRfKsb6/yBNMth2xaoliBfRvTCZ0a5wSN/TRBW5Ux4IVCrow/Zbz2i5oNXA9Y5ApzUBVQk/4uIgC/4y4IpSIKiG8H/wsLkEgVywCyJRVmMF2ttnQik7jr4Y5xQz7GfzOUa2UdU=)
2025-05-25 03:28:26.528891 | orchestrator | changed: [testbed-manager] => (item=testbed-node-0 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBKlGdtMhaSXKmsrXqbNF2c3cSIpYZf+1zcK/jEMaWHQDbEjfcA3YfCb8hClnQgg2Rf/lzEn7mES35mcGx52cBzI=)
2025-05-25 03:28:26.529866 | orchestrator | changed: [testbed-manager] => (item=testbed-node-0 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAICZMeDtCJhro/6rSxr201yhYKvmBUoFXNnHwLr+USyHV)
2025-05-25 03:28:26.530180 | orchestrator |
2025-05-25 03:28:26.531511 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] ***********
2025-05-25 03:28:26.532290 | orchestrator | Sunday 25 May 2025 03:28:26 +0000 (0:00:01.031) 0:00:11.701 ************
2025-05-25 03:28:27.591766 | orchestrator | changed: [testbed-manager] => (item=testbed-node-1 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBCjCIZvNmuhgicNTqxLpX+LRkHRJeSIIO3XJPwZPeIV7h5IABkfvIM9EErZAKGVk8cPYMwsjiVuJSuzL6Qaw00s=)
2025-05-25 03:28:27.592690 | orchestrator | changed: [testbed-manager] => (item=testbed-node-1 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCxPgsPJR8gnGq5d5mSjB+7r+E4dqL3IOcS4GqxYeh6iBiDaQTeNsHvrHkqF8U2cWWTND57DupD7No2UTst9k93cUip9KYr2x9z9yCJHD+ie0ZWu/oKQGrnwkqd7DDfma4IH5BgitrndoVm5evrPB+tuSnSYVvb2B9xiZfXLrmEG123P5XKjjFe4vN1zerI4H/pYJhsYwZVXSN9+xfocBhDOwVm1Fl2wCzmtXMi3+uD1MzwF0mcQbliWXeEV+J8sFvDfkkbmd3pxn3xxct5OnHri3Y39apiA6pTcRP2szVQElDqqKP989WDSKa+QmD7i9teFkLn5/bS3Nn0rBKACzD3yLoSExcOmHUr55GMbsGvIzGhBiE9BF7JJF9RWKOlRBKYxXUSW3WJ/n90RVyhOc0KN1/MoFP3wrq/MXzpgQvHeQXTXy65dKTfXbtpYJlshAxc36JIwQ1KjeNOLhEZmrul8BlOxzRHAKbJrPjDd1aOQ6YEK4DK2nJ454bHLhrH6Y8=)
2025-05-25 03:28:27.592816 | orchestrator | changed: [testbed-manager] => (item=testbed-node-1 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIObVzqzJ7uEE/6v1kI8KZV3XPTZa5Aq9fRNfwtZWEtBb)
2025-05-25 03:28:27.593223 | orchestrator |
2025-05-25 03:28:27.593875 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] ***********
2025-05-25 03:28:27.594174 | orchestrator | Sunday 25 May 2025 03:28:27 +0000 (0:00:01.063) 0:00:12.765 ************
2025-05-25 03:28:28.618968 | orchestrator | changed: [testbed-manager] => (item=testbed-node-2 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBA+OvKC2EvKTiZ5EM5yAOfWqIu82SlXcKhkGCL1eVkYurRNLlV3ok/BEnNGy4jBdlSB8DzsLojxFcFpYoNayc38=)
2025-05-25 03:28:28.619075 | orchestrator | changed: [testbed-manager] => (item=testbed-node-2 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDwCXUTplm8YGmzPChQ3lmmwahzkwGULlmn2dBU0nHIGa7VEgOS7ismCrAZMNDAOEeRhNMM++ugXR1FGp1HwLBDWcT0VjEj+FGgNk1DlfVmLcT4tcI30fVPQvyEy0YR3+vhwQbvRYdk5ryOuALiK6bCs1UGMPNYXhNct4qegB3/jdiUxrapWU85G+jXvvqQhhGYPUI1fr1y0xS7oKOCTmvRYKfwsnoZ3hwevFLyjpnbUklcUM5zt9svNs1ybK74JyoKhQovs5lhqPI3nkzkocvIrlDMLyk6Udasjlv67760I2To9nyxi/5o95WLI/E5yxPQuo37PgZpV+gDrBTksYMBHi0q15/YcKiuOtBkBxjKTwT0ovpJaOfOewnENBaR4gP5T/8YBz0HSxAt1FlD/wvBzskHoq7WudyfO5U5Am54QeCvzGYqtVWibjzAdKuF9xovtpTicHbnCa49Sg1OLG2urCkqqJOncwx7d2hpd1Ct7rCVYY2P0Y4Z5KjNH0yTkDE=)
2025-05-25 03:28:28.619664 | orchestrator | changed: [testbed-manager] => (item=testbed-node-2 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIJusM90KHvF94kfUWbnufV9bBjrMNscfdJ8LCwrduz7O)
2025-05-25 03:28:28.620721 | orchestrator |
2025-05-25 03:28:28.621557 | orchestrator | TASK [osism.commons.known_hosts : Run ssh-keyscan for all hosts with ansible_host] ***
2025-05-25 03:28:28.622266 | orchestrator | Sunday 25 May 2025 03:28:28 +0000 (0:00:01.027) 0:00:13.793 ************
2025-05-25 03:28:33.821274 | orchestrator | ok: [testbed-manager] => (item=testbed-manager)
2025-05-25 03:28:33.821390 | orchestrator | ok: [testbed-manager] => (item=testbed-node-3)
2025-05-25 03:28:33.823050 | orchestrator | ok: [testbed-manager] => (item=testbed-node-4)
2025-05-25 03:28:33.823733 | orchestrator | ok: [testbed-manager] => (item=testbed-node-5)
2025-05-25 03:28:33.824943 | orchestrator | ok: [testbed-manager] => (item=testbed-node-0)
2025-05-25 03:28:33.825717 | orchestrator | ok: [testbed-manager] => (item=testbed-node-1)
2025-05-25 03:28:33.827399 | orchestrator | ok: [testbed-manager] => (item=testbed-node-2)
2025-05-25 03:28:33.828603 | orchestrator |
2025-05-25 03:28:33.829295 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries for all hosts with ansible_host] ***
2025-05-25 03:28:33.830182 | orchestrator | Sunday 25 May 2025 03:28:33 +0000 (0:00:05.202) 0:00:18.995 ************
2025-05-25 03:28:33.993708 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-manager)
2025-05-25 03:28:33.994476 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-3)
2025-05-25 03:28:33.994678 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-4)
2025-05-25 03:28:33.997067 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-5)
2025-05-25 03:28:33.997469 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-0)
2025-05-25 03:28:33.998151 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-1)
2025-05-25 03:28:33.998493 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-2)
2025-05-25 03:28:33.999510 | orchestrator |
2025-05-25 03:28:34.000678 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] ***********
2025-05-25 03:28:34.001525 | orchestrator | Sunday 25 May 2025 03:28:33 +0000 (0:00:00.173) 0:00:19.169 ************
2025-05-25 03:28:35.045298 | orchestrator | changed: [testbed-manager] => (item=192.168.16.5 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDo7kYrsr0Huw2c9/76EGPfc98JHr4F7mYBytXPLoITAljwVID/+kgwar2Q+IJqxyfVcR2/q8iy00vzj+nMJ/qWZ+5hKZGDsiagP3WXlZz+X5mFsyk+JsvDgv4UcBq54ocm/EcFSuF52rX/xJ9drbLGUeMgIjpD7ZAVc/uGH5saBD3BVT3lJcnrWHDB6t1foubwZqNRYl53wZVWl+ugYRdikNRUlsRSrnpdJvCtlT5yp7ZeswO2tDOVMKaIptxZjOuy13rhItacFOzDPZmm2MlmTCuI0dIepgMlkh6vpzPVp0SKXIxS+74IRK2JQTzzEfJKdbhXaO+bNzFXrejnK7qJXdpAaTcV8d+NIKygymO2DaB835LqGEoLlKHCFq1kDvBARl7rGdW4T5qiW2Wx4YGgaQQ8aOVwiyZEf8Vzn073p+mczVYJsJwX4X0s8f529/0Sw+9h+8+OV9I0/nLsxz/bIfGXK90vplyjxa1kzNn3+psrhfW74RifhnyS4twnJRc=)
2025-05-25 03:28:35.045636 | orchestrator | changed: [testbed-manager] => (item=192.168.16.5 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBBIyDqymZ4zWssUk5AHED0Dq7EHeGJrdCUYMIG2MFVp5bn1o5zSI5tqkAyEWthgR8oAVFuFA1ex7EFRTez89wsM=)
2025-05-25 03:28:35.046927 | orchestrator | changed: [testbed-manager] => (item=192.168.16.5 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIKsnj8w0wMR9EYSud726adEUjctT9D+w/90EbjhbP6th)
2025-05-25 03:28:35.047592 | orchestrator |
2025-05-25 03:28:35.048301 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] ***********
2025-05-25 03:28:35.048808 | orchestrator | Sunday 25 May 2025 03:28:35 +0000 (0:00:01.049) 0:00:20.218 ************
2025-05-25 03:28:36.072057 | orchestrator | changed: [testbed-manager] => (item=192.168.16.13 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBEZ8RtiYkNTn3r7GiI09ZDXkFXmYgSU3gjJdHR/HJZz0tSDtx+fHqPM9JR0ocfy/F+oiWGkdrezRtIGL/X2AZ/I=)
2025-05-25 03:28:36.073348 | orchestrator | changed: [testbed-manager] => (item=192.168.16.13 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDVNB2HDFPIXvWcEfdBlZ+CqcfjUnJSOpB0keXMdr21wv+6NBgpX9byCHNzAFm+ycG47N57sEcr7DZbXRT44Nc7KQ+nxpmjYCXVMEeBbepGx60/gp2p/8aUcgO+z1CG6ZwiQyDnCNKcRDe5z5AatbCqVZiDvmUrjYBYGiH2OVHKe00mooxkm1UWAJ/lv63ARNWQUUZa61vTJ7C9k9zOZJtCcJLF7zeZltUYpvOgrJ79VfUClF6YDmA/y2Y4RF1eiSPy7yl4p9Zd3G2kSiadjYIxQ2ZUcI5tkOg2ZkfEbmgs0zHw8RN5YWJ5Iy7aEDtDrjSSTAEFuUlreWxNgND2rM2NPay48213Am8Cdx6TjbWcE32o41S2l8rbYil/w9l2FHVOREeokQLSzyaeqx/1k4X2VvXePGqc6kw0k9EE6dZrtNx2bR+kuLAyBlfhz2xo1HxeGk2psMc+fnUo44iIAKx4ZsieLqHNL1r5tkZv6YSu8VDt+PcI282flniC76bqsyM=)
2025-05-25 03:28:36.073388 | orchestrator | changed: [testbed-manager] => (item=192.168.16.13 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIPaYfZAwTqsGtkjkUTNjoEIb5xD6yOusSBl5h4BIvg8D)
2025-05-25 03:28:36.074922 | orchestrator |
2025-05-25 03:28:36.075950 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] ***********
2025-05-25 03:28:36.076720 | orchestrator | Sunday 25 May 2025 03:28:36 +0000 (0:00:01.028) 0:00:21.247 ************
2025-05-25 03:28:37.139959 | orchestrator | changed: [testbed-manager] => (item=192.168.16.14 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDAG+3TyPj3UNHQVtyvUF3Lep99sQuAOEQRTwFhNZ8yLjP/VFGgJu8mIpwxpypwGL87OfGkKGD4VQ9J9B+K+v+cAzOH/CbIOSf41B+xcUvqYWyml4b16/AHue+6AVF/1RtGSOKp7M1CtUjoXqGtB88Ev6DzZzgZU1YIlSmU66NG/7L+D3LOvr8dx3MDgc9L0H0OXvP8/0zZSd0uIxqBwgu5LU2Mocb+b44aJsDIU7MDqyRq7ilifLtHoZMkb0eOuyS8AVQxCUqxi1QaR71MKY/iIrtOoYIBM4obaSknRF7EfJQw9DyC4KlV2x1PlMrfKMzfI01PMT4B2Oi1qaK3NbdO1zBsw7aDYj5suuZUxsdDd7d1FD0gzZZl+AXuLZ6igX/Dlq23dyJl+C+T7qIO5BR03QUL2+PfxqWahYRB3fc7s52h/wmKU7Vx+Y00vYFAy6qhvoeT+pVxC/aqRqbzCRiqaY4A/jctXdYqI7djXn+4RhzBLWUjooMNzWpHO/pWB88=)
2025-05-25 03:28:37.140157 | orchestrator | changed: [testbed-manager] => (item=192.168.16.14 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBP5REx7T0gm3arYLRULaMPR0+cK0j0xqyxHHezAfGUhTPxENtbhSkWeAzGwpENFJNlbc4cpSUIMaq8FBetk8dCw=)
2025-05-25 03:28:37.140979 | orchestrator | changed: [testbed-manager] => (item=192.168.16.14 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIB7sdL4/NdznBzhaOTZpoPLZ31nbzVn6hDAQkvTjZwTr)
2025-05-25 03:28:37.142003 | orchestrator |
2025-05-25 03:28:37.142907 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] ***********
2025-05-25 03:28:37.143547 | orchestrator | Sunday 25 May 2025 03:28:37 +0000 (0:00:01.066) 0:00:22.313 ************
2025-05-25 03:28:38.253055 | orchestrator | changed: [testbed-manager] => (item=192.168.16.15 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQC+U0wlxEUQbWMJmUOY1SbRqrpLA+gmGRXkkqogYPdlbQ3YkVyDibZpz9GSg9WtJRl/1bQhBE6JWx5wUEh4oxG1dmiD13jgz2KQA6DZx9nKNW0Ehino4oNbfEeUxlkdFglqg8kYl1c0UlN12i3noy8/fTDIT4GTKGhKkZzgHouaA8LLPeHCleTKXgCiVrtyAVBAWG0aqOk5I2TbUR55pdVXqi0QwTpSobKBfk00qrE8DWfrsEecyY2XxEt7JyhXlgGdO9wgX8hqOclluQGofqze+kX/oMI8sbp6WyIlCEULQs92gMPMBw54VNOnH/J/LKMUmyt2YJOQD87uD1s+nUvYyqzVxHLipZibFh8bxW4nVJx9d39Q+dEp2czTzOt9DCqPfqQfwVTUY1a6266Qk2MRx4i5OMMLbrR6NnhLWNPt8Re8adW7sVOn6aSsNpaJptfM5S2DkhdzSUWJ7d1fL9Bf03UsJnUC/NcLChtxrFHDJXtRtTBN7H7mo8u7E800zCs=)
2025-05-25 03:28:38.253317 | orchestrator | changed: [testbed-manager] => (item=192.168.16.15 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBDE900ulEMfKPUIcDV5KlIepa1z/RQCVFC9juZ8K+cihYlnr3fE6Avu4H1FPvhxPhzAk1PJpzJ8q9mhhtvZs+Us=)
2025-05-25 03:28:38.253950 | orchestrator | changed: [testbed-manager] => (item=192.168.16.15 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIE3uGg5lMRrN+fLNw7Zc2CpFjE71bg1ucb7asbxa3Kcw)
2025-05-25 03:28:38.255003 | orchestrator |
2025-05-25 03:28:38.256291 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] ***********
2025-05-25 03:28:38.256926 | orchestrator | Sunday 25 May 2025 03:28:38 +0000 (0:00:01.113) 0:00:23.426 ************
2025-05-25 03:28:39.304499 | orchestrator | changed: [testbed-manager] => (item=192.168.16.10 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCQ2v3BN8q8JEuIWDkk+vZZK0BAoSuQGLKNTFY6tQ0duEIT69Ec8cePYxmmF2LgZ2PJMaIb3C4mNkxP3YA9njQvXmgZe2qCg7WqVfu2YQOs5YBECny7/3utTGmwqgQgocfF1uzkgzF0Lnjjhe9mCxptKYSXbnrh82xVce3HAuWCYV4E39rhZmmciJLRyiLxqqcKX07VjiLtEKcTxbXCkhQNcath1Ujd7myyBK6lYWhKZ6nkSZF2ijJr2hfT3Og+JE11QNzY3WXn5sqm9yjmq8SiuP4v3ZGV0ineRgxVafODwzT9lOpaqBs/FWm36YvIejsQakT7saH4c7Qmf5CvysH2JVW/TxbKC3/pTx/0btc4UstU9e9FARzpcKiv2cRfKsb6/yBNMth2xaoliBfRvTCZ0a5wSN/TRBW5Ux4IVCrow/Zbz2i5oNXA9Y5ApzUBVQk/4uIgC/4y4IpSIKiG8H/wsLkEgVywCyJRVmMF2ttnQik7jr4Y5xQz7GfzOUa2UdU=)
2025-05-25 03:28:39.304977 | orchestrator | changed: [testbed-manager] => (item=192.168.16.10 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBKlGdtMhaSXKmsrXqbNF2c3cSIpYZf+1zcK/jEMaWHQDbEjfcA3YfCb8hClnQgg2Rf/lzEn7mES35mcGx52cBzI=)
2025-05-25 03:28:39.306402 | orchestrator | changed: [testbed-manager] => (item=192.168.16.10 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAICZMeDtCJhro/6rSxr201yhYKvmBUoFXNnHwLr+USyHV)
2025-05-25 03:28:39.307076 | orchestrator |
2025-05-25 03:28:39.307862 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] ***********
2025-05-25 03:28:39.308506 | orchestrator | Sunday 25 May 2025 03:28:39 +0000 (0:00:01.052) 0:00:24.478 ************
2025-05-25 03:28:40.363852 | orchestrator | changed: [testbed-manager] => (item=192.168.16.11 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBCjCIZvNmuhgicNTqxLpX+LRkHRJeSIIO3XJPwZPeIV7h5IABkfvIM9EErZAKGVk8cPYMwsjiVuJSuzL6Qaw00s=)
2025-05-25 03:28:40.365148 | orchestrator | changed: [testbed-manager] => (item=192.168.16.11 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCxPgsPJR8gnGq5d5mSjB+7r+E4dqL3IOcS4GqxYeh6iBiDaQTeNsHvrHkqF8U2cWWTND57DupD7No2UTst9k93cUip9KYr2x9z9yCJHD+ie0ZWu/oKQGrnwkqd7DDfma4IH5BgitrndoVm5evrPB+tuSnSYVvb2B9xiZfXLrmEG123P5XKjjFe4vN1zerI4H/pYJhsYwZVXSN9+xfocBhDOwVm1Fl2wCzmtXMi3+uD1MzwF0mcQbliWXeEV+J8sFvDfkkbmd3pxn3xxct5OnHri3Y39apiA6pTcRP2szVQElDqqKP989WDSKa+QmD7i9teFkLn5/bS3Nn0rBKACzD3yLoSExcOmHUr55GMbsGvIzGhBiE9BF7JJF9RWKOlRBKYxXUSW3WJ/n90RVyhOc0KN1/MoFP3wrq/MXzpgQvHeQXTXy65dKTfXbtpYJlshAxc36JIwQ1KjeNOLhEZmrul8BlOxzRHAKbJrPjDd1aOQ6YEK4DK2nJ454bHLhrH6Y8=)
2025-05-25 03:28:40.367074 | orchestrator | changed: [testbed-manager] => (item=192.168.16.11 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIObVzqzJ7uEE/6v1kI8KZV3XPTZa5Aq9fRNfwtZWEtBb)
2025-05-25 03:28:40.367474 | orchestrator |
2025-05-25 03:28:40.368644 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] ***********
2025-05-25 03:28:40.369146 | orchestrator | Sunday 25 May 2025 03:28:40 +0000 (0:00:01.060) 0:00:25.538 ************
2025-05-25 03:28:41.438091 | orchestrator | changed: [testbed-manager] => (item=192.168.16.12 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDwCXUTplm8YGmzPChQ3lmmwahzkwGULlmn2dBU0nHIGa7VEgOS7ismCrAZMNDAOEeRhNMM++ugXR1FGp1HwLBDWcT0VjEj+FGgNk1DlfVmLcT4tcI30fVPQvyEy0YR3+vhwQbvRYdk5ryOuALiK6bCs1UGMPNYXhNct4qegB3/jdiUxrapWU85G+jXvvqQhhGYPUI1fr1y0xS7oKOCTmvRYKfwsnoZ3hwevFLyjpnbUklcUM5zt9svNs1ybK74JyoKhQovs5lhqPI3nkzkocvIrlDMLyk6Udasjlv67760I2To9nyxi/5o95WLI/E5yxPQuo37PgZpV+gDrBTksYMBHi0q15/YcKiuOtBkBxjKTwT0ovpJaOfOewnENBaR4gP5T/8YBz0HSxAt1FlD/wvBzskHoq7WudyfO5U5Am54QeCvzGYqtVWibjzAdKuF9xovtpTicHbnCa49Sg1OLG2urCkqqJOncwx7d2hpd1Ct7rCVYY2P0Y4Z5KjNH0yTkDE=)
2025-05-25 03:28:41.438225 | orchestrator | changed: [testbed-manager] => (item=192.168.16.12 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIJusM90KHvF94kfUWbnufV9bBjrMNscfdJ8LCwrduz7O)
2025-05-25 03:28:41.438359 | orchestrator | changed: [testbed-manager] => (item=192.168.16.12 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBA+OvKC2EvKTiZ5EM5yAOfWqIu82SlXcKhkGCL1eVkYurRNLlV3ok/BEnNGy4jBdlSB8DzsLojxFcFpYoNayc38=)
2025-05-25 03:28:41.439346 | orchestrator |
2025-05-25 03:28:41.440238 | orchestrator | TASK [osism.commons.known_hosts : Write static known_hosts entries] ************
2025-05-25 03:28:41.440927 | orchestrator | Sunday 25 May 2025 03:28:41 +0000 (0:00:01.072) 0:00:26.610 ************
2025-05-25 03:28:41.815407 | orchestrator | skipping: [testbed-manager] => (item=testbed-manager)
2025-05-25 03:28:41.816742 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-3)
2025-05-25 03:28:41.817135 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-4)
2025-05-25 03:28:41.818594 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-5)
2025-05-25 03:28:41.819649 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-0)
2025-05-25 03:28:41.820610 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-1)
2025-05-25 03:28:41.821466 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-2)
2025-05-25 03:28:41.822678 | orchestrator |
skipping: [testbed-manager] 2025-05-25 03:28:41.823193 | orchestrator | 2025-05-25 03:28:41.824166 | orchestrator | TASK [osism.commons.known_hosts : Write extra known_hosts entries] ************* 2025-05-25 03:28:41.824910 | orchestrator | Sunday 25 May 2025 03:28:41 +0000 (0:00:00.380) 0:00:26.991 ************ 2025-05-25 03:28:41.870434 | orchestrator | skipping: [testbed-manager] 2025-05-25 03:28:41.871191 | orchestrator | 2025-05-25 03:28:41.872168 | orchestrator | TASK [osism.commons.known_hosts : Delete known_hosts entries] ****************** 2025-05-25 03:28:41.872908 | orchestrator | Sunday 25 May 2025 03:28:41 +0000 (0:00:00.055) 0:00:27.046 ************ 2025-05-25 03:28:41.923087 | orchestrator | skipping: [testbed-manager] 2025-05-25 03:28:41.923846 | orchestrator | 2025-05-25 03:28:41.924555 | orchestrator | TASK [osism.commons.known_hosts : Set file permissions] ************************ 2025-05-25 03:28:41.925493 | orchestrator | Sunday 25 May 2025 03:28:41 +0000 (0:00:00.052) 0:00:27.099 ************ 2025-05-25 03:28:42.413393 | orchestrator | changed: [testbed-manager] 2025-05-25 03:28:42.414064 | orchestrator | 2025-05-25 03:28:42.415108 | orchestrator | PLAY RECAP ********************************************************************* 2025-05-25 03:28:42.415824 | orchestrator | 2025-05-25 03:28:42 | INFO  | Play has been completed. There may now be a delay until all logs have been written. 2025-05-25 03:28:42.415900 | orchestrator | 2025-05-25 03:28:42 | INFO  | Please wait and do not abort execution. 
2025-05-25 03:28:42.417167 | orchestrator | testbed-manager : ok=31  changed=15  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2025-05-25 03:28:42.417998 | orchestrator | 2025-05-25 03:28:42.419326 | orchestrator | 2025-05-25 03:28:42.420459 | orchestrator | TASKS RECAP ******************************************************************** 2025-05-25 03:28:42.421544 | orchestrator | Sunday 25 May 2025 03:28:42 +0000 (0:00:00.488) 0:00:27.587 ************ 2025-05-25 03:28:42.422604 | orchestrator | =============================================================================== 2025-05-25 03:28:42.423445 | orchestrator | osism.commons.known_hosts : Run ssh-keyscan for all hosts with hostname --- 6.00s 2025-05-25 03:28:42.426422 | orchestrator | osism.commons.known_hosts : Run ssh-keyscan for all hosts with ansible_host --- 5.20s 2025-05-25 03:28:42.428111 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.17s 2025-05-25 03:28:42.429844 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.11s 2025-05-25 03:28:42.430624 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.09s 2025-05-25 03:28:42.431409 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.07s 2025-05-25 03:28:42.431751 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.07s 2025-05-25 03:28:42.432324 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.06s 2025-05-25 03:28:42.432925 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.06s 2025-05-25 03:28:42.433245 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.06s 2025-05-25 03:28:42.433775 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.05s 2025-05-25 03:28:42.434415 | 
orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.05s 2025-05-25 03:28:42.434760 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.03s 2025-05-25 03:28:42.435215 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.03s 2025-05-25 03:28:42.435683 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.03s 2025-05-25 03:28:42.436172 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.02s 2025-05-25 03:28:42.436590 | orchestrator | osism.commons.known_hosts : Set file permissions ------------------------ 0.49s 2025-05-25 03:28:42.437033 | orchestrator | osism.commons.known_hosts : Write static known_hosts entries ------------ 0.38s 2025-05-25 03:28:42.437368 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries for all hosts with ansible_host --- 0.17s 2025-05-25 03:28:42.437777 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries for all hosts with hostname --- 0.16s 2025-05-25 03:28:42.836773 | orchestrator | + osism apply squid 2025-05-25 03:28:44.512677 | orchestrator | 2025-05-25 03:28:44 | INFO  | Task 9aec5aa7-5a5b-458c-a6fb-fa1f69c65b90 (squid) was prepared for execution. 2025-05-25 03:28:44.512744 | orchestrator | 2025-05-25 03:28:44 | INFO  | It takes a moment until task 9aec5aa7-5a5b-458c-a6fb-fa1f69c65b90 (squid) has been started and output is visible here. 
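The `Run ssh-keyscan …` / `Write scanned known_hosts entries` tasks recapped above scan each host's rsa/ecdsa/ed25519 keys and write them idempotently (hence `changed` on the first run). A minimal shell sketch of that scan-and-write pattern — the key types, file mode, and helper names are illustrative assumptions, not taken from the role:

```shell
#!/bin/sh
# Sketch of the scan-and-write pattern used by osism.commons.known_hosts above.
# scan_hosts/write_known_hosts are hypothetical helper names for illustration.

scan_hosts() {
    # ssh-keyscan prints one "host keytype base64key" line per key type.
    for host in "$@"; do
        ssh-keyscan -t rsa,ecdsa,ed25519 "$host" 2>/dev/null
    done
}

write_known_hosts() {
    # Append entries from stdin to the file given as $1, skipping lines that
    # are already present, so repeated runs stay idempotent ("ok" vs "changed").
    file=$1
    touch "$file"
    while IFS= read -r entry; do
        grep -qxF "$entry" "$file" || printf '%s\n' "$entry" >> "$file"
    done
    chmod 0644 "$file"
}
```

Usage would look like `scan_hosts 192.168.16.10 192.168.16.11 | write_known_hosts ~/.ssh/known_hosts`, mirroring the per-host loop visible in the task output.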
2025-05-25 03:28:48.567593 | orchestrator |
2025-05-25 03:28:48.567912 | orchestrator | PLAY [Apply role squid] ********************************************************
2025-05-25 03:28:48.567982 | orchestrator |
2025-05-25 03:28:48.568239 | orchestrator | TASK [osism.services.squid : Include install tasks] ****************************
2025-05-25 03:28:48.568611 | orchestrator | Sunday 25 May 2025 03:28:48 +0000 (0:00:00.209) 0:00:00.209 ************
2025-05-25 03:28:48.667049 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/squid/tasks/install-Debian-family.yml for testbed-manager
2025-05-25 03:28:48.667710 | orchestrator |
2025-05-25 03:28:48.669326 | orchestrator | TASK [osism.services.squid : Install required packages] ************************
2025-05-25 03:28:48.670304 | orchestrator | Sunday 25 May 2025 03:28:48 +0000 (0:00:00.100) 0:00:00.309 ************
2025-05-25 03:28:50.078162 | orchestrator | ok: [testbed-manager]
2025-05-25 03:28:50.078269 | orchestrator |
2025-05-25 03:28:50.079870 | orchestrator | TASK [osism.services.squid : Create required directories] **********************
2025-05-25 03:28:50.080787 | orchestrator | Sunday 25 May 2025 03:28:50 +0000 (0:00:01.409) 0:00:01.719 ************
2025-05-25 03:28:51.257178 | orchestrator | changed: [testbed-manager] => (item=/opt/squid/configuration)
2025-05-25 03:28:51.257286 | orchestrator | changed: [testbed-manager] => (item=/opt/squid/configuration/conf.d)
2025-05-25 03:28:51.257862 | orchestrator | ok: [testbed-manager] => (item=/opt/squid)
2025-05-25 03:28:51.259057 | orchestrator |
2025-05-25 03:28:51.260498 | orchestrator | TASK [osism.services.squid : Copy squid configuration files] *******************
2025-05-25 03:28:51.261194 | orchestrator | Sunday 25 May 2025 03:28:51 +0000 (0:00:01.179) 0:00:02.898 ************
2025-05-25 03:28:52.317507 | orchestrator | changed: [testbed-manager] => (item=osism.conf)
2025-05-25 03:28:52.318684 | orchestrator |
2025-05-25 03:28:52.319886 | orchestrator | TASK [osism.services.squid : Remove osism_allow_list.conf configuration file] ***
2025-05-25 03:28:52.320981 | orchestrator | Sunday 25 May 2025 03:28:52 +0000 (0:00:01.059) 0:00:03.958 ************
2025-05-25 03:28:52.669247 | orchestrator | ok: [testbed-manager]
2025-05-25 03:28:52.669352 | orchestrator |
2025-05-25 03:28:52.669692 | orchestrator | TASK [osism.services.squid : Copy docker-compose.yml file] *********************
2025-05-25 03:28:52.671675 | orchestrator | Sunday 25 May 2025 03:28:52 +0000 (0:00:00.350) 0:00:04.309 ************
2025-05-25 03:28:53.590855 | orchestrator | changed: [testbed-manager]
2025-05-25 03:28:53.591430 | orchestrator |
2025-05-25 03:28:53.592093 | orchestrator | TASK [osism.services.squid : Manage squid service] *****************************
2025-05-25 03:28:53.592918 | orchestrator | Sunday 25 May 2025 03:28:53 +0000 (0:00:00.919) 0:00:05.229 ************
2025-05-25 03:29:25.670614 | orchestrator | FAILED - RETRYING: [testbed-manager]: Manage squid service (10 retries left).
2025-05-25 03:29:25.670778 | orchestrator | ok: [testbed-manager]
2025-05-25 03:29:25.670916 | orchestrator |
2025-05-25 03:29:25.671367 | orchestrator | RUNNING HANDLER [osism.services.squid : Restart squid service] *****************
2025-05-25 03:29:25.671557 | orchestrator | Sunday 25 May 2025 03:29:25 +0000 (0:00:32.081) 0:00:37.310 ************
2025-05-25 03:29:38.091497 | orchestrator | changed: [testbed-manager]
2025-05-25 03:29:38.091618 | orchestrator |
2025-05-25 03:29:38.091636 | orchestrator | RUNNING HANDLER [osism.services.squid : Wait for squid service to start] *******
2025-05-25 03:29:38.091650 | orchestrator | Sunday 25 May 2025 03:29:38 +0000 (0:00:12.416) 0:00:49.727 ************
2025-05-25 03:30:38.169513 | orchestrator | Pausing for 60 seconds
2025-05-25 03:30:38.169623 | orchestrator | changed: [testbed-manager]
2025-05-25 03:30:38.169640 | orchestrator |
2025-05-25 03:30:38.169653 | orchestrator | RUNNING HANDLER [osism.services.squid : Register that squid service was restarted] ***
2025-05-25 03:30:38.169666 | orchestrator | Sunday 25 May 2025 03:30:38 +0000 (0:01:00.075) 0:01:49.802 ************
2025-05-25 03:30:38.224855 | orchestrator | ok: [testbed-manager]
2025-05-25 03:30:38.226078 | orchestrator |
2025-05-25 03:30:38.226688 | orchestrator | RUNNING HANDLER [osism.services.squid : Wait for an healthy squid service] *****
2025-05-25 03:30:38.227175 | orchestrator | Sunday 25 May 2025 03:30:38 +0000 (0:00:00.064) 0:01:49.867 ************
2025-05-25 03:30:38.849511 | orchestrator | changed: [testbed-manager]
2025-05-25 03:30:38.850992 | orchestrator |
2025-05-25 03:30:38.851623 | orchestrator | PLAY RECAP *********************************************************************
2025-05-25 03:30:38.853368 | orchestrator | 2025-05-25 03:30:38 | INFO  | Play has been completed. There may now be a delay until all logs have been written.
2025-05-25 03:30:38.853393 | orchestrator | 2025-05-25 03:30:38 | INFO  | Please wait and do not abort execution.
2025-05-25 03:30:38.854135 | orchestrator | testbed-manager : ok=11  changed=6  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-05-25 03:30:38.855089 | orchestrator |
2025-05-25 03:30:38.855863 | orchestrator |
2025-05-25 03:30:38.856856 | orchestrator | TASKS RECAP ********************************************************************
2025-05-25 03:30:38.858291 | orchestrator | Sunday 25 May 2025 03:30:38 +0000 (0:00:00.624) 0:01:50.491 ************
2025-05-25 03:30:38.858662 | orchestrator | ===============================================================================
2025-05-25 03:30:38.859063 | orchestrator | osism.services.squid : Wait for squid service to start ----------------- 60.08s
2025-05-25 03:30:38.859352 | orchestrator | osism.services.squid : Manage squid service ---------------------------- 32.08s
2025-05-25 03:30:38.859711 | orchestrator | osism.services.squid : Restart squid service --------------------------- 12.42s
2025-05-25 03:30:38.860112 | orchestrator | osism.services.squid : Install required packages ------------------------ 1.41s
2025-05-25 03:30:38.860410 | orchestrator | osism.services.squid : Create required directories ---------------------- 1.18s
2025-05-25 03:30:38.861176 | orchestrator | osism.services.squid : Copy squid configuration files ------------------- 1.06s
2025-05-25 03:30:38.862069 | orchestrator | osism.services.squid : Copy docker-compose.yml file --------------------- 0.92s
2025-05-25 03:30:38.862259 | orchestrator | osism.services.squid : Wait for an healthy squid service ---------------- 0.62s
2025-05-25 03:30:38.862377 | orchestrator | osism.services.squid : Remove osism_allow_list.conf configuration file --- 0.35s
2025-05-25 03:30:38.862825 | orchestrator | osism.services.squid : Include install tasks ---------------------------- 0.10s
2025-05-25 03:30:38.863121 | orchestrator | osism.services.squid : Register that squid service was restarted -------- 0.06s
2025-05-25 03:30:39.318962 | orchestrator | + [[ latest != \l\a\t\e\s\t ]]
2025-05-25 03:30:39.319683 | orchestrator | ++ semver latest 9.0.0
2025-05-25 03:30:39.368511 | orchestrator | + [[ -1 -lt 0 ]]
2025-05-25 03:30:39.368581 | orchestrator | + [[ latest != \l\a\t\e\s\t ]]
2025-05-25 03:30:39.369276 | orchestrator | + osism apply operator -u ubuntu -l testbed-nodes
2025-05-25 03:30:41.048698 | orchestrator | 2025-05-25 03:30:41 | INFO  | Task d20edef6-4647-4981-8d47-32b2239739f5 (operator) was prepared for execution.
2025-05-25 03:30:41.048802 | orchestrator | 2025-05-25 03:30:41 | INFO  | It takes a moment until task d20edef6-4647-4981-8d47-32b2239739f5 (operator) has been started and output is visible here.
2025-05-25 03:30:44.872763 | orchestrator |
2025-05-25 03:30:44.872879 | orchestrator | PLAY [Make ssh pipelining working] *********************************************
2025-05-25 03:30:44.872974 | orchestrator |
2025-05-25 03:30:44.873053 | orchestrator | TASK [Gathering Facts] *********************************************************
2025-05-25 03:30:44.874262 | orchestrator | Sunday 25 May 2025 03:30:44 +0000 (0:00:00.144) 0:00:00.144 ************
2025-05-25 03:30:48.221882 | orchestrator | ok: [testbed-node-2]
2025-05-25 03:30:48.223113 | orchestrator | ok: [testbed-node-1]
2025-05-25 03:30:48.224400 | orchestrator | ok: [testbed-node-0]
2025-05-25 03:30:48.224948 | orchestrator | ok: [testbed-node-3]
2025-05-25 03:30:48.225800 | orchestrator | ok: [testbed-node-5]
2025-05-25 03:30:48.227095 | orchestrator | ok: [testbed-node-4]
2025-05-25 03:30:48.228005 | orchestrator |
2025-05-25 03:30:48.229055 | orchestrator | TASK [Do not require tty for all users] ****************************************
2025-05-25 03:30:48.229357 | orchestrator | Sunday 25 May 2025 03:30:48 +0000 (0:00:03.354) 0:00:03.499 ************
2025-05-25 03:30:49.011592 | orchestrator | ok: [testbed-node-2]
2025-05-25 03:30:49.011767 | orchestrator | ok: [testbed-node-3]
2025-05-25 03:30:49.014604 | orchestrator | ok: [testbed-node-4]
2025-05-25 03:30:49.014635 | orchestrator | ok: [testbed-node-0]
2025-05-25 03:30:49.014648 | orchestrator | ok: [testbed-node-1]
2025-05-25 03:30:49.015647 | orchestrator | ok: [testbed-node-5]
2025-05-25 03:30:49.016243 | orchestrator |
2025-05-25 03:30:49.016729 | orchestrator | PLAY [Apply role operator] *****************************************************
2025-05-25 03:30:49.017855 | orchestrator |
2025-05-25 03:30:49.018375 | orchestrator | TASK [osism.commons.operator : Gather variables for each operating system] *****
2025-05-25 03:30:49.019317 | orchestrator | Sunday 25 May 2025 03:30:49 +0000 (0:00:00.790) 0:00:04.290 ************
2025-05-25 03:30:49.085688 | orchestrator | ok: [testbed-node-0]
2025-05-25 03:30:49.106713 | orchestrator | ok: [testbed-node-1]
2025-05-25 03:30:49.135283 | orchestrator | ok: [testbed-node-2]
2025-05-25 03:30:49.194286 | orchestrator | ok: [testbed-node-3]
2025-05-25 03:30:49.194367 | orchestrator | ok: [testbed-node-4]
2025-05-25 03:30:49.194382 | orchestrator | ok: [testbed-node-5]
2025-05-25 03:30:49.194394 | orchestrator |
2025-05-25 03:30:49.194407 | orchestrator | TASK [osism.commons.operator : Set operator_groups variable to default value] ***
2025-05-25 03:30:49.194421 | orchestrator | Sunday 25 May 2025 03:30:49 +0000 (0:00:00.179) 0:00:04.469 ************
2025-05-25 03:30:49.285142 | orchestrator | ok: [testbed-node-0]
2025-05-25 03:30:49.303126 | orchestrator | ok: [testbed-node-1]
2025-05-25 03:30:49.330736 | orchestrator | ok: [testbed-node-2]
2025-05-25 03:30:49.372732 | orchestrator | ok: [testbed-node-3]
2025-05-25 03:30:49.373328 | orchestrator | ok: [testbed-node-4]
2025-05-25 03:30:49.374066 | orchestrator | ok: [testbed-node-5]
2025-05-25 03:30:49.374456 | orchestrator |
2025-05-25 03:30:49.375053 | orchestrator | TASK [osism.commons.operator : Create operator group] **************************
2025-05-25 03:30:49.375835 | orchestrator | Sunday 25 May 2025 03:30:49 +0000 (0:00:00.183) 0:00:04.653 ************
2025-05-25 03:30:49.972923 | orchestrator | changed: [testbed-node-0]
2025-05-25 03:30:49.974305 | orchestrator | changed: [testbed-node-2]
2025-05-25 03:30:49.974766 | orchestrator | changed: [testbed-node-3]
2025-05-25 03:30:49.978392 | orchestrator | changed: [testbed-node-5]
2025-05-25 03:30:49.978580 | orchestrator | changed: [testbed-node-1]
2025-05-25 03:30:49.980407 | orchestrator | changed: [testbed-node-4]
2025-05-25 03:30:49.981763 | orchestrator |
2025-05-25 03:30:49.982892 | orchestrator | TASK [osism.commons.operator : Create user] ************************************
2025-05-25 03:30:49.983667 | orchestrator | Sunday 25 May 2025 03:30:49 +0000 (0:00:00.598) 0:00:05.252 ************
2025-05-25 03:30:50.787996 | orchestrator | changed: [testbed-node-0]
2025-05-25 03:30:50.789174 | orchestrator | changed: [testbed-node-3]
2025-05-25 03:30:50.792002 | orchestrator | changed: [testbed-node-2]
2025-05-25 03:30:50.792666 | orchestrator | changed: [testbed-node-4]
2025-05-25 03:30:50.793443 | orchestrator | changed: [testbed-node-1]
2025-05-25 03:30:50.794288 | orchestrator | changed: [testbed-node-5]
2025-05-25 03:30:50.794544 | orchestrator |
2025-05-25 03:30:50.795547 | orchestrator | TASK [osism.commons.operator : Add user to additional groups] ******************
2025-05-25 03:30:50.796141 | orchestrator | Sunday 25 May 2025 03:30:50 +0000 (0:00:00.814) 0:00:06.066 ************
2025-05-25 03:30:51.946095 | orchestrator | changed: [testbed-node-0] => (item=adm)
2025-05-25 03:30:51.946216 | orchestrator | changed: [testbed-node-2] => (item=adm)
2025-05-25 03:30:51.946234 | orchestrator | changed: [testbed-node-3] => (item=adm)
2025-05-25 03:30:51.946748 | orchestrator | changed: [testbed-node-1] => (item=adm)
2025-05-25 03:30:51.947554 | orchestrator | changed: [testbed-node-4] => (item=adm)
2025-05-25 03:30:51.948446 | orchestrator | changed: [testbed-node-5] => (item=adm)
2025-05-25 03:30:51.949529 | orchestrator | changed: [testbed-node-0] => (item=sudo)
2025-05-25 03:30:51.949857 | orchestrator | changed: [testbed-node-1] => (item=sudo)
2025-05-25 03:30:51.950850 | orchestrator | changed: [testbed-node-3] => (item=sudo)
2025-05-25 03:30:51.951766 | orchestrator | changed: [testbed-node-2] => (item=sudo)
2025-05-25 03:30:51.952841 | orchestrator | changed: [testbed-node-4] => (item=sudo)
2025-05-25 03:30:51.953302 | orchestrator | changed: [testbed-node-5] => (item=sudo)
2025-05-25 03:30:51.954309 | orchestrator |
2025-05-25 03:30:51.955085 | orchestrator | TASK [osism.commons.operator : Copy user sudoers file] *************************
2025-05-25 03:30:51.955422 | orchestrator | Sunday 25 May 2025 03:30:51 +0000 (0:00:01.154) 0:00:07.221 ************
2025-05-25 03:30:53.196344 | orchestrator | changed: [testbed-node-3]
2025-05-25 03:30:53.196450 | orchestrator | changed: [testbed-node-1]
2025-05-25 03:30:53.196518 | orchestrator | changed: [testbed-node-2]
2025-05-25 03:30:53.197058 | orchestrator | changed: [testbed-node-5]
2025-05-25 03:30:53.199170 | orchestrator | changed: [testbed-node-0]
2025-05-25 03:30:53.200091 | orchestrator | changed: [testbed-node-4]
2025-05-25 03:30:53.200589 | orchestrator |
2025-05-25 03:30:53.201620 | orchestrator | TASK [osism.commons.operator : Set language variables in .bashrc configuration file] ***
2025-05-25 03:30:53.202189 | orchestrator | Sunday 25 May 2025 03:30:53 +0000 (0:00:01.250) 0:00:08.472 ************
2025-05-25 03:30:54.378666 | orchestrator | [WARNING]: Module remote_tmp /root/.ansible/tmp did not exist and was created
2025-05-25 03:30:54.379680 | orchestrator | with a mode of 0700, this may cause issues when running as another user. To
2025-05-25 03:30:54.379719 | orchestrator | avoid this, create the remote_tmp dir with the correct permissions manually
2025-05-25 03:30:54.404684 | orchestrator | changed: [testbed-node-0] => (item=export LANGUAGE=C.UTF-8)
2025-05-25 03:30:54.405314 | orchestrator | changed: [testbed-node-4] => (item=export LANGUAGE=C.UTF-8)
2025-05-25 03:30:54.405967 | orchestrator | changed: [testbed-node-5] => (item=export LANGUAGE=C.UTF-8)
2025-05-25 03:30:54.406884 | orchestrator | changed: [testbed-node-1] => (item=export LANGUAGE=C.UTF-8)
2025-05-25 03:30:54.407831 | orchestrator | changed: [testbed-node-2] => (item=export LANGUAGE=C.UTF-8)
2025-05-25 03:30:54.408126 | orchestrator | changed: [testbed-node-3] => (item=export LANGUAGE=C.UTF-8)
2025-05-25 03:30:54.408830 | orchestrator | changed: [testbed-node-4] => (item=export LANG=C.UTF-8)
2025-05-25 03:30:54.409728 | orchestrator | changed: [testbed-node-0] => (item=export LANG=C.UTF-8)
2025-05-25 03:30:54.410082 | orchestrator | changed: [testbed-node-1] => (item=export LANG=C.UTF-8)
2025-05-25 03:30:54.410835 | orchestrator | changed: [testbed-node-5] => (item=export LANG=C.UTF-8)
2025-05-25 03:30:54.411666 | orchestrator | changed: [testbed-node-3] => (item=export LANG=C.UTF-8)
2025-05-25 03:30:54.413084 | orchestrator | changed: [testbed-node-2] => (item=export LANG=C.UTF-8)
2025-05-25 03:30:54.413843 | orchestrator | changed: [testbed-node-0] => (item=export LC_ALL=C.UTF-8)
2025-05-25 03:30:54.414650 | orchestrator | changed: [testbed-node-5] => (item=export LC_ALL=C.UTF-8)
2025-05-25 03:30:54.415415 | orchestrator | changed: [testbed-node-1] => (item=export LC_ALL=C.UTF-8)
2025-05-25 03:30:54.416287 | orchestrator | changed: [testbed-node-3] => (item=export LC_ALL=C.UTF-8)
2025-05-25 03:30:54.417072 | orchestrator | changed: [testbed-node-2] => (item=export LC_ALL=C.UTF-8)
2025-05-25 03:30:54.417848 | orchestrator | changed: [testbed-node-4] => (item=export LC_ALL=C.UTF-8)
2025-05-25 03:30:54.418241 | orchestrator |
2025-05-25 03:30:54.419345 | orchestrator | TASK [osism.commons.operator : Create .ssh directory] **************************
2025-05-25 03:30:54.419859 | orchestrator | Sunday 25 May 2025 03:30:54 +0000 (0:00:01.211) 0:00:09.684 ************
2025-05-25 03:30:54.997802 | orchestrator | changed: [testbed-node-3]
2025-05-25 03:30:54.998088 | orchestrator | changed: [testbed-node-0]
2025-05-25 03:30:55.001885 | orchestrator | changed: [testbed-node-1]
2025-05-25 03:30:55.002734 | orchestrator | changed: [testbed-node-2]
2025-05-25 03:30:55.004189 | orchestrator | changed: [testbed-node-5]
2025-05-25 03:30:55.004799 | orchestrator | changed: [testbed-node-4]
2025-05-25 03:30:55.005300 | orchestrator |
2025-05-25 03:30:55.006657 | orchestrator | TASK [osism.commons.operator : Check number of SSH authorized keys] ************
2025-05-25 03:30:55.007600 | orchestrator | Sunday 25 May 2025 03:30:54 +0000 (0:00:00.591) 0:00:10.276 ************
2025-05-25 03:30:55.067109 | orchestrator | skipping: [testbed-node-0]
2025-05-25 03:30:55.088795 | orchestrator | skipping: [testbed-node-1]
2025-05-25 03:30:55.114320 | orchestrator | skipping: [testbed-node-2]
2025-05-25 03:30:55.158141 | orchestrator | skipping: [testbed-node-3]
2025-05-25 03:30:55.161047 | orchestrator | skipping: [testbed-node-4]
2025-05-25 03:30:55.161076 | orchestrator | skipping: [testbed-node-5]
2025-05-25 03:30:55.161090 | orchestrator |
2025-05-25 03:30:55.161339 | orchestrator | TASK [osism.commons.operator : Set ssh authorized keys] ************************
2025-05-25 03:30:55.162185 | orchestrator | Sunday 25 May 2025 03:30:55 +0000 (0:00:00.160) 0:00:10.437 ************
2025-05-25 03:30:55.840447 | orchestrator | changed: [testbed-node-0] => (item=None)
2025-05-25 03:30:55.841096 | orchestrator | changed: [testbed-node-0]
2025-05-25 03:30:55.843128 | orchestrator | changed: [testbed-node-1] => (item=None)
2025-05-25 03:30:55.844461 | orchestrator | changed: [testbed-node-3] => (item=None)
2025-05-25 03:30:55.845527 | orchestrator | changed: [testbed-node-1]
2025-05-25 03:30:55.846387 | orchestrator | changed: [testbed-node-4] => (item=None)
2025-05-25 03:30:55.847233 | orchestrator | changed: [testbed-node-3]
2025-05-25 03:30:55.848181 | orchestrator | changed: [testbed-node-2] => (item=None)
2025-05-25 03:30:55.848561 | orchestrator | changed: [testbed-node-4]
2025-05-25 03:30:55.849324 | orchestrator | changed: [testbed-node-2]
2025-05-25 03:30:55.850265 | orchestrator | changed: [testbed-node-5] => (item=None)
2025-05-25 03:30:55.850700 | orchestrator | changed: [testbed-node-5]
2025-05-25 03:30:55.851582 | orchestrator |
2025-05-25 03:30:55.852441 | orchestrator | TASK [osism.commons.operator : Delete ssh authorized keys] *********************
2025-05-25 03:30:55.852776 | orchestrator | Sunday 25 May 2025 03:30:55 +0000 (0:00:00.679) 0:00:11.116 ************
2025-05-25 03:30:55.903753 | orchestrator | skipping: [testbed-node-0]
2025-05-25 03:30:55.931122 | orchestrator | skipping: [testbed-node-1]
2025-05-25 03:30:55.951198 | orchestrator | skipping: [testbed-node-2]
2025-05-25 03:30:55.985021 | orchestrator | skipping: [testbed-node-3]
2025-05-25 03:30:55.985083 | orchestrator | skipping: [testbed-node-4]
2025-05-25 03:30:55.985096 | orchestrator | skipping: [testbed-node-5]
2025-05-25 03:30:55.985109 | orchestrator |
2025-05-25 03:30:55.987452 | orchestrator | TASK [osism.commons.operator : Set authorized GitHub accounts] *****************
2025-05-25 03:30:55.987949 | orchestrator | Sunday 25 May 2025 03:30:55 +0000 (0:00:00.145) 0:00:11.261 ************
2025-05-25 03:30:56.026864 | orchestrator | skipping: [testbed-node-0]
2025-05-25 03:30:56.046191 | orchestrator | skipping: [testbed-node-1]
2025-05-25 03:30:56.076172 | orchestrator | skipping: [testbed-node-2]
2025-05-25 03:30:56.096654 | orchestrator | skipping: [testbed-node-3]
2025-05-25 03:30:56.126255 | orchestrator | skipping: [testbed-node-4]
2025-05-25 03:30:56.126538 | orchestrator | skipping: [testbed-node-5]
2025-05-25 03:30:56.127390 | orchestrator |
2025-05-25 03:30:56.128248 | orchestrator | TASK [osism.commons.operator : Delete authorized GitHub accounts] **************
2025-05-25 03:30:56.128677 | orchestrator | Sunday 25 May 2025 03:30:56 +0000 (0:00:00.144) 0:00:11.406 ************
2025-05-25 03:30:56.210584 | orchestrator | skipping: [testbed-node-0]
2025-05-25 03:30:56.237885 | orchestrator | skipping: [testbed-node-1]
2025-05-25 03:30:56.259334 | orchestrator | skipping: [testbed-node-2]
2025-05-25 03:30:56.288886 | orchestrator | skipping: [testbed-node-3]
2025-05-25 03:30:56.291895 | orchestrator | skipping: [testbed-node-4]
2025-05-25 03:30:56.291951 | orchestrator | skipping: [testbed-node-5]
2025-05-25 03:30:56.291964 | orchestrator |
2025-05-25 03:30:56.292308 | orchestrator | TASK [osism.commons.operator : Set password] ***********************************
2025-05-25 03:30:56.292974 | orchestrator | Sunday 25 May 2025 03:30:56 +0000 (0:00:00.160) 0:00:11.567 ************
2025-05-25 03:30:56.941348 | orchestrator | changed: [testbed-node-0]
2025-05-25 03:30:56.942563 | orchestrator | changed: [testbed-node-1]
2025-05-25 03:30:56.942638 | orchestrator | changed: [testbed-node-2]
2025-05-25 03:30:56.942995 | orchestrator | changed: [testbed-node-3]
2025-05-25 03:30:56.943933 | orchestrator | changed: [testbed-node-4]
2025-05-25 03:30:56.944806 | orchestrator | changed: [testbed-node-5]
2025-05-25 03:30:56.945732 | orchestrator |
2025-05-25 03:30:56.945765 | orchestrator | TASK [osism.commons.operator : Unset & lock password] **************************
2025-05-25 03:30:56.946558 | orchestrator | Sunday 25 May 2025 03:30:56 +0000 (0:00:00.653) 0:00:12.220 ************
2025-05-25 03:30:57.035817 | orchestrator | skipping: [testbed-node-0]
2025-05-25 03:30:57.064111 | orchestrator | skipping: [testbed-node-1]
2025-05-25 03:30:57.158926 | orchestrator | skipping: [testbed-node-2]
2025-05-25 03:30:57.159122 | orchestrator | skipping: [testbed-node-3]
2025-05-25 03:30:57.160009 | orchestrator | skipping: [testbed-node-4]
2025-05-25 03:30:57.160368 | orchestrator | skipping: [testbed-node-5]
2025-05-25 03:30:57.160760 | orchestrator |
2025-05-25 03:30:57.161645 | orchestrator | PLAY RECAP *********************************************************************
2025-05-25 03:30:57.161697 | orchestrator | 2025-05-25 03:30:57 | INFO  | Play has been completed. There may now be a delay until all logs have been written.
2025-05-25 03:30:57.161876 | orchestrator | 2025-05-25 03:30:57 | INFO  | Please wait and do not abort execution.
2025-05-25 03:30:57.162507 | orchestrator | testbed-node-0 : ok=12  changed=8  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0
2025-05-25 03:30:57.162806 | orchestrator | testbed-node-1 : ok=12  changed=8  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0
2025-05-25 03:30:57.163448 | orchestrator | testbed-node-2 : ok=12  changed=8  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0
2025-05-25 03:30:57.163746 | orchestrator | testbed-node-3 : ok=12  changed=8  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0
2025-05-25 03:30:57.164504 | orchestrator | testbed-node-4 : ok=12  changed=8  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0
2025-05-25 03:30:57.165120 | orchestrator | testbed-node-5 : ok=12  changed=8  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0
2025-05-25 03:30:57.165349 | orchestrator |
2025-05-25 03:30:57.165724 | orchestrator |
2025-05-25 03:30:57.166209 | orchestrator | TASKS RECAP ********************************************************************
2025-05-25 03:30:57.166585 | orchestrator | Sunday 25 May 2025 03:30:57 +0000 (0:00:00.218) 0:00:12.439 ************
2025-05-25 03:30:57.167056 | orchestrator | ===============================================================================
2025-05-25 03:30:57.167505 | orchestrator | Gathering Facts --------------------------------------------------------- 3.35s
2025-05-25 03:30:57.167777 | orchestrator | osism.commons.operator : Copy user sudoers file ------------------------- 1.25s
2025-05-25 03:30:57.168091 | orchestrator | osism.commons.operator : Set language variables in .bashrc configuration file --- 1.21s
2025-05-25 03:30:57.168603 | orchestrator | osism.commons.operator : Add user to additional groups ------------------ 1.15s
2025-05-25 03:30:57.168787 | orchestrator | osism.commons.operator : Create user ------------------------------------ 0.81s
2025-05-25 03:30:57.169197 | orchestrator | Do not require tty for all users ---------------------------------------- 0.79s
2025-05-25 03:30:57.169523 | orchestrator | osism.commons.operator : Set ssh authorized keys ------------------------ 0.68s
2025-05-25 03:30:57.169724 | orchestrator | osism.commons.operator : Set password ----------------------------------- 0.65s
2025-05-25 03:30:57.170109 | orchestrator | osism.commons.operator : Create operator group -------------------------- 0.60s
2025-05-25 03:30:57.170411 | orchestrator | osism.commons.operator : Create .ssh directory -------------------------- 0.59s
2025-05-25 03:30:57.171017 | orchestrator | osism.commons.operator : Unset & lock password -------------------------- 0.22s
2025-05-25 03:30:57.171096 | orchestrator | osism.commons.operator : Set operator_groups variable to default value --- 0.18s
2025-05-25 03:30:57.171443 | orchestrator | osism.commons.operator : Gather variables for each operating system ----- 0.18s
2025-05-25 03:30:57.171775 | orchestrator | osism.commons.operator : Delete authorized GitHub accounts -------------- 0.16s
2025-05-25 03:30:57.172485 | orchestrator | osism.commons.operator : Check number of SSH authorized keys ------------ 0.16s
2025-05-25 03:30:57.172645 | orchestrator | osism.commons.operator : Delete ssh authorized keys --------------------- 0.15s
2025-05-25 03:30:57.173038 | orchestrator | osism.commons.operator : Set authorized GitHub accounts ----------------- 0.14s
2025-05-25 03:30:57.607500 | orchestrator | + osism apply --environment custom facts
2025-05-25 03:30:59.260899 | orchestrator | 2025-05-25 03:30:59 | INFO  | Trying to run play facts in environment custom
2025-05-25 03:30:59.324736 | orchestrator | 2025-05-25 03:30:59 | INFO  | Task 10479ed4-70bb-416b-b4ff-7fd39bdb1bcb (facts) was prepared for execution.
2025-05-25 03:30:59.324794 | orchestrator | 2025-05-25 03:30:59 | INFO  | It takes a moment until task 10479ed4-70bb-416b-b4ff-7fd39bdb1bcb (facts) has been started and output is visible here.
2025-05-25 03:31:03.098496 | orchestrator |
2025-05-25 03:31:03.098608 | orchestrator | PLAY [Copy custom network devices fact] ****************************************
2025-05-25 03:31:03.099380 | orchestrator |
2025-05-25 03:31:03.099559 | orchestrator | TASK [Create custom facts directory] *******************************************
2025-05-25 03:31:03.101305 | orchestrator | Sunday 25 May 2025 03:31:03 +0000 (0:00:00.066) 0:00:00.066 ************
2025-05-25 03:31:04.444956 | orchestrator | ok: [testbed-manager]
2025-05-25 03:31:04.445058 | orchestrator | changed: [testbed-node-2]
2025-05-25 03:31:04.445074 | orchestrator | changed: [testbed-node-0]
2025-05-25 03:31:04.445086 | orchestrator | changed: [testbed-node-3]
2025-05-25 03:31:04.446291 | orchestrator | changed: [testbed-node-1]
2025-05-25 03:31:04.449576 | orchestrator | changed: [testbed-node-5]
2025-05-25 03:31:04.449628 | orchestrator | changed: [testbed-node-4]
2025-05-25 03:31:04.449643 | orchestrator |
2025-05-25 03:31:04.450642 | orchestrator | TASK [Copy fact file] **********************************************************
2025-05-25 03:31:04.451014 | orchestrator | Sunday 25 May 2025 03:31:04 +0000 (0:00:01.343) 0:00:01.410 ************
2025-05-25 03:31:05.550678 | orchestrator | ok: [testbed-manager]
2025-05-25 03:31:05.551858 | orchestrator | changed: [testbed-node-2]
2025-05-25 03:31:05.551908 | orchestrator | changed: [testbed-node-0]
2025-05-25 03:31:05.551944
| orchestrator | changed: [testbed-node-3] 2025-05-25 03:31:05.552047 | orchestrator | changed: [testbed-node-1] 2025-05-25 03:31:05.552809 | orchestrator | changed: [testbed-node-5] 2025-05-25 03:31:05.553351 | orchestrator | changed: [testbed-node-4] 2025-05-25 03:31:05.554129 | orchestrator | 2025-05-25 03:31:05.555073 | orchestrator | PLAY [Copy custom ceph devices facts] ****************************************** 2025-05-25 03:31:05.555978 | orchestrator | 2025-05-25 03:31:05.556726 | orchestrator | TASK [osism.commons.repository : Gather variables for each operating system] *** 2025-05-25 03:31:05.557270 | orchestrator | Sunday 25 May 2025 03:31:05 +0000 (0:00:01.107) 0:00:02.517 ************ 2025-05-25 03:31:05.656694 | orchestrator | ok: [testbed-node-3] 2025-05-25 03:31:05.657277 | orchestrator | ok: [testbed-node-4] 2025-05-25 03:31:05.657624 | orchestrator | ok: [testbed-node-5] 2025-05-25 03:31:05.658749 | orchestrator | 2025-05-25 03:31:05.658817 | orchestrator | TASK [osism.commons.repository : Set repository_default fact to default value] *** 2025-05-25 03:31:05.659518 | orchestrator | Sunday 25 May 2025 03:31:05 +0000 (0:00:00.108) 0:00:02.626 ************ 2025-05-25 03:31:05.836772 | orchestrator | ok: [testbed-node-3] 2025-05-25 03:31:05.837045 | orchestrator | ok: [testbed-node-4] 2025-05-25 03:31:05.838468 | orchestrator | ok: [testbed-node-5] 2025-05-25 03:31:05.839098 | orchestrator | 2025-05-25 03:31:05.839980 | orchestrator | TASK [osism.commons.repository : Set repositories to default] ****************** 2025-05-25 03:31:05.840686 | orchestrator | Sunday 25 May 2025 03:31:05 +0000 (0:00:00.180) 0:00:02.807 ************ 2025-05-25 03:31:06.014594 | orchestrator | ok: [testbed-node-3] 2025-05-25 03:31:06.015161 | orchestrator | ok: [testbed-node-4] 2025-05-25 03:31:06.016124 | orchestrator | ok: [testbed-node-5] 2025-05-25 03:31:06.017066 | orchestrator | 2025-05-25 03:31:06.017757 | orchestrator | TASK [osism.commons.repository : Include 
distribution specific repository tasks] *** 2025-05-25 03:31:06.018784 | orchestrator | Sunday 25 May 2025 03:31:06 +0000 (0:00:00.177) 0:00:02.985 ************ 2025-05-25 03:31:06.127070 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/repository/tasks/Ubuntu.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-05-25 03:31:06.127796 | orchestrator | 2025-05-25 03:31:06.130452 | orchestrator | TASK [osism.commons.repository : Create /etc/apt/sources.list.d directory] ***** 2025-05-25 03:31:06.131419 | orchestrator | Sunday 25 May 2025 03:31:06 +0000 (0:00:00.111) 0:00:03.096 ************ 2025-05-25 03:31:06.559377 | orchestrator | ok: [testbed-node-3] 2025-05-25 03:31:06.560069 | orchestrator | ok: [testbed-node-4] 2025-05-25 03:31:06.562264 | orchestrator | ok: [testbed-node-5] 2025-05-25 03:31:06.562305 | orchestrator | 2025-05-25 03:31:06.563221 | orchestrator | TASK [osism.commons.repository : Include tasks for Ubuntu < 24.04] ************* 2025-05-25 03:31:06.563939 | orchestrator | Sunday 25 May 2025 03:31:06 +0000 (0:00:00.429) 0:00:03.526 ************ 2025-05-25 03:31:06.672221 | orchestrator | skipping: [testbed-node-3] 2025-05-25 03:31:06.672321 | orchestrator | skipping: [testbed-node-4] 2025-05-25 03:31:06.672558 | orchestrator | skipping: [testbed-node-5] 2025-05-25 03:31:06.672885 | orchestrator | 2025-05-25 03:31:06.673526 | orchestrator | TASK [osism.commons.repository : Copy 99osism apt configuration] *************** 2025-05-25 03:31:06.675236 | orchestrator | Sunday 25 May 2025 03:31:06 +0000 (0:00:00.115) 0:00:03.641 ************ 2025-05-25 03:31:07.752953 | orchestrator | changed: [testbed-node-3] 2025-05-25 03:31:07.753085 | orchestrator | changed: [testbed-node-4] 2025-05-25 03:31:07.753410 | orchestrator | changed: [testbed-node-5] 2025-05-25 03:31:07.754059 | orchestrator | 2025-05-25 03:31:07.754451 | orchestrator | TASK [osism.commons.repository : Remove sources.list file] 
********************* 2025-05-25 03:31:07.755416 | orchestrator | Sunday 25 May 2025 03:31:07 +0000 (0:00:01.078) 0:00:04.720 ************ 2025-05-25 03:31:08.253255 | orchestrator | ok: [testbed-node-3] 2025-05-25 03:31:08.254194 | orchestrator | ok: [testbed-node-4] 2025-05-25 03:31:08.254757 | orchestrator | ok: [testbed-node-5] 2025-05-25 03:31:08.255855 | orchestrator | 2025-05-25 03:31:08.256476 | orchestrator | TASK [osism.commons.repository : Copy ubuntu.sources file] ********************* 2025-05-25 03:31:08.257270 | orchestrator | Sunday 25 May 2025 03:31:08 +0000 (0:00:00.501) 0:00:05.222 ************ 2025-05-25 03:31:09.426103 | orchestrator | changed: [testbed-node-4] 2025-05-25 03:31:09.426203 | orchestrator | changed: [testbed-node-3] 2025-05-25 03:31:09.426708 | orchestrator | changed: [testbed-node-5] 2025-05-25 03:31:09.427231 | orchestrator | 2025-05-25 03:31:09.428640 | orchestrator | TASK [osism.commons.repository : Update package cache] ************************* 2025-05-25 03:31:09.429252 | orchestrator | Sunday 25 May 2025 03:31:09 +0000 (0:00:01.171) 0:00:06.393 ************ 2025-05-25 03:31:22.703671 | orchestrator | changed: [testbed-node-3] 2025-05-25 03:31:22.703793 | orchestrator | changed: [testbed-node-4] 2025-05-25 03:31:22.703808 | orchestrator | changed: [testbed-node-5] 2025-05-25 03:31:22.704282 | orchestrator | 2025-05-25 03:31:22.704716 | orchestrator | TASK [Install required packages (RedHat)] ************************************** 2025-05-25 03:31:22.705490 | orchestrator | Sunday 25 May 2025 03:31:22 +0000 (0:00:13.274) 0:00:19.668 ************ 2025-05-25 03:31:22.758298 | orchestrator | skipping: [testbed-node-3] 2025-05-25 03:31:22.801104 | orchestrator | skipping: [testbed-node-4] 2025-05-25 03:31:22.801377 | orchestrator | skipping: [testbed-node-5] 2025-05-25 03:31:22.803075 | orchestrator | 2025-05-25 03:31:22.804073 | orchestrator | TASK [Install required packages (Debian)] ************************************** 
2025-05-25 03:31:22.804769 | orchestrator | Sunday 25 May 2025 03:31:22 +0000 (0:00:00.102) 0:00:19.771 ************ 2025-05-25 03:31:30.091679 | orchestrator | changed: [testbed-node-3] 2025-05-25 03:31:30.091799 | orchestrator | changed: [testbed-node-5] 2025-05-25 03:31:30.091814 | orchestrator | changed: [testbed-node-4] 2025-05-25 03:31:30.093196 | orchestrator | 2025-05-25 03:31:30.093738 | orchestrator | TASK [Create custom facts directory] ******************************************* 2025-05-25 03:31:30.095123 | orchestrator | Sunday 25 May 2025 03:31:30 +0000 (0:00:07.285) 0:00:27.056 ************ 2025-05-25 03:31:30.519855 | orchestrator | ok: [testbed-node-4] 2025-05-25 03:31:30.520560 | orchestrator | ok: [testbed-node-3] 2025-05-25 03:31:30.521631 | orchestrator | ok: [testbed-node-5] 2025-05-25 03:31:30.522607 | orchestrator | 2025-05-25 03:31:30.523415 | orchestrator | TASK [Copy fact files] ********************************************************* 2025-05-25 03:31:30.524560 | orchestrator | Sunday 25 May 2025 03:31:30 +0000 (0:00:00.432) 0:00:27.489 ************ 2025-05-25 03:31:34.060453 | orchestrator | changed: [testbed-node-3] => (item=testbed_ceph_devices) 2025-05-25 03:31:34.060571 | orchestrator | changed: [testbed-node-4] => (item=testbed_ceph_devices) 2025-05-25 03:31:34.060592 | orchestrator | changed: [testbed-node-5] => (item=testbed_ceph_devices) 2025-05-25 03:31:34.060604 | orchestrator | changed: [testbed-node-3] => (item=testbed_ceph_devices_all) 2025-05-25 03:31:34.060615 | orchestrator | changed: [testbed-node-4] => (item=testbed_ceph_devices_all) 2025-05-25 03:31:34.060869 | orchestrator | changed: [testbed-node-5] => (item=testbed_ceph_devices_all) 2025-05-25 03:31:34.063068 | orchestrator | changed: [testbed-node-3] => (item=testbed_ceph_osd_devices) 2025-05-25 03:31:34.063756 | orchestrator | changed: [testbed-node-4] => (item=testbed_ceph_osd_devices) 2025-05-25 03:31:34.064265 | orchestrator | changed: [testbed-node-5] => 
(item=testbed_ceph_osd_devices) 2025-05-25 03:31:34.064798 | orchestrator | changed: [testbed-node-3] => (item=testbed_ceph_osd_devices_all) 2025-05-25 03:31:34.065530 | orchestrator | changed: [testbed-node-4] => (item=testbed_ceph_osd_devices_all) 2025-05-25 03:31:34.066105 | orchestrator | changed: [testbed-node-5] => (item=testbed_ceph_osd_devices_all) 2025-05-25 03:31:34.066539 | orchestrator | 2025-05-25 03:31:34.067008 | orchestrator | RUNNING HANDLER [osism.commons.repository : Force update of package cache] ***** 2025-05-25 03:31:34.067603 | orchestrator | Sunday 25 May 2025 03:31:34 +0000 (0:00:03.536) 0:00:31.026 ************ 2025-05-25 03:31:35.321615 | orchestrator | ok: [testbed-node-3] 2025-05-25 03:31:35.321723 | orchestrator | ok: [testbed-node-4] 2025-05-25 03:31:35.323297 | orchestrator | ok: [testbed-node-5] 2025-05-25 03:31:35.323575 | orchestrator | 2025-05-25 03:31:35.324544 | orchestrator | PLAY [Gather facts for all hosts] ********************************************** 2025-05-25 03:31:35.327842 | orchestrator | 2025-05-25 03:31:35.327855 | orchestrator | TASK [Gathers facts about hosts] *********************************************** 2025-05-25 03:31:35.327861 | orchestrator | Sunday 25 May 2025 03:31:35 +0000 (0:00:01.262) 0:00:32.289 ************ 2025-05-25 03:31:39.030214 | orchestrator | ok: [testbed-node-0] 2025-05-25 03:31:39.030321 | orchestrator | ok: [testbed-node-1] 2025-05-25 03:31:39.030935 | orchestrator | ok: [testbed-node-2] 2025-05-25 03:31:39.031342 | orchestrator | ok: [testbed-manager] 2025-05-25 03:31:39.032493 | orchestrator | ok: [testbed-node-3] 2025-05-25 03:31:39.033498 | orchestrator | ok: [testbed-node-4] 2025-05-25 03:31:39.034180 | orchestrator | ok: [testbed-node-5] 2025-05-25 03:31:39.034479 | orchestrator | 2025-05-25 03:31:39.036376 | orchestrator | PLAY RECAP ********************************************************************* 2025-05-25 03:31:39.036637 | orchestrator | 2025-05-25 03:31:39 | INFO  | Play 
has been completed. There may now be a delay until all logs have been written. 2025-05-25 03:31:39.036663 | orchestrator | 2025-05-25 03:31:39 | INFO  | Please wait and do not abort execution. 2025-05-25 03:31:39.038735 | orchestrator | testbed-manager : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-05-25 03:31:39.038830 | orchestrator | testbed-node-0 : ok=3  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-05-25 03:31:39.038866 | orchestrator | testbed-node-1 : ok=3  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-05-25 03:31:39.039116 | orchestrator | testbed-node-2 : ok=3  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-05-25 03:31:39.039714 | orchestrator | testbed-node-3 : ok=16  changed=7  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-05-25 03:31:39.040647 | orchestrator | testbed-node-4 : ok=16  changed=7  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-05-25 03:31:39.042067 | orchestrator | testbed-node-5 : ok=16  changed=7  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-05-25 03:31:39.042742 | orchestrator | 2025-05-25 03:31:39.043681 | orchestrator | 2025-05-25 03:31:39.044647 | orchestrator | TASKS RECAP ******************************************************************** 2025-05-25 03:31:39.045365 | orchestrator | Sunday 25 May 2025 03:31:39 +0000 (0:00:03.710) 0:00:35.999 ************ 2025-05-25 03:31:39.045690 | orchestrator | =============================================================================== 2025-05-25 03:31:39.046303 | orchestrator | osism.commons.repository : Update package cache ------------------------ 13.27s 2025-05-25 03:31:39.046695 | orchestrator | Install required packages (Debian) -------------------------------------- 7.29s 2025-05-25 03:31:39.047172 | orchestrator | Gathers facts about hosts ----------------------------------------------- 3.71s 2025-05-25 03:31:39.047622 | orchestrator | Copy fact 
files --------------------------------------------------------- 3.54s 2025-05-25 03:31:39.048123 | orchestrator | Create custom facts directory ------------------------------------------- 1.34s 2025-05-25 03:31:39.048600 | orchestrator | osism.commons.repository : Force update of package cache ---------------- 1.26s 2025-05-25 03:31:39.049029 | orchestrator | osism.commons.repository : Copy ubuntu.sources file --------------------- 1.17s 2025-05-25 03:31:39.049643 | orchestrator | Copy fact file ---------------------------------------------------------- 1.11s 2025-05-25 03:31:39.050126 | orchestrator | osism.commons.repository : Copy 99osism apt configuration --------------- 1.08s 2025-05-25 03:31:39.050483 | orchestrator | osism.commons.repository : Remove sources.list file --------------------- 0.50s 2025-05-25 03:31:39.051166 | orchestrator | Create custom facts directory ------------------------------------------- 0.43s 2025-05-25 03:31:39.051456 | orchestrator | osism.commons.repository : Create /etc/apt/sources.list.d directory ----- 0.43s 2025-05-25 03:31:39.051706 | orchestrator | osism.commons.repository : Set repository_default fact to default value --- 0.18s 2025-05-25 03:31:39.052117 | orchestrator | osism.commons.repository : Set repositories to default ------------------ 0.18s 2025-05-25 03:31:39.052528 | orchestrator | osism.commons.repository : Include tasks for Ubuntu < 24.04 ------------- 0.12s 2025-05-25 03:31:39.052994 | orchestrator | osism.commons.repository : Include distribution specific repository tasks --- 0.11s 2025-05-25 03:31:39.053430 | orchestrator | osism.commons.repository : Gather variables for each operating system --- 0.11s 2025-05-25 03:31:39.053821 | orchestrator | Install required packages (RedHat) -------------------------------------- 0.10s 2025-05-25 03:31:39.471368 | orchestrator | + osism apply bootstrap 2025-05-25 03:31:41.176004 | orchestrator | 2025-05-25 03:31:41 | INFO  | Task 81e50c3d-dbdb-423f-892b-21cd680a2cf9 
(bootstrap) was prepared for execution. 2025-05-25 03:31:41.176113 | orchestrator | 2025-05-25 03:31:41 | INFO  | It takes a moment until task 81e50c3d-dbdb-423f-892b-21cd680a2cf9 (bootstrap) has been started and output is visible here. 2025-05-25 03:31:45.090283 | orchestrator | 2025-05-25 03:31:45.090428 | orchestrator | PLAY [Group hosts based on state bootstrap] ************************************ 2025-05-25 03:31:45.090448 | orchestrator | 2025-05-25 03:31:45.090525 | orchestrator | TASK [Group hosts based on state bootstrap] ************************************ 2025-05-25 03:31:45.092105 | orchestrator | Sunday 25 May 2025 03:31:45 +0000 (0:00:00.125) 0:00:00.125 ************ 2025-05-25 03:31:45.164235 | orchestrator | ok: [testbed-manager] 2025-05-25 03:31:45.183800 | orchestrator | ok: [testbed-node-3] 2025-05-25 03:31:45.210180 | orchestrator | ok: [testbed-node-4] 2025-05-25 03:31:45.235679 | orchestrator | ok: [testbed-node-5] 2025-05-25 03:31:45.303176 | orchestrator | ok: [testbed-node-0] 2025-05-25 03:31:45.303262 | orchestrator | ok: [testbed-node-1] 2025-05-25 03:31:45.303616 | orchestrator | ok: [testbed-node-2] 2025-05-25 03:31:45.303667 | orchestrator | 2025-05-25 03:31:45.303721 | orchestrator | PLAY [Gather facts for all hosts] ********************************************** 2025-05-25 03:31:45.305830 | orchestrator | 2025-05-25 03:31:45.307226 | orchestrator | TASK [Gathers facts about hosts] *********************************************** 2025-05-25 03:31:45.308395 | orchestrator | Sunday 25 May 2025 03:31:45 +0000 (0:00:00.219) 0:00:00.344 ************ 2025-05-25 03:31:48.812042 | orchestrator | ok: [testbed-node-1] 2025-05-25 03:31:48.812158 | orchestrator | ok: [testbed-node-2] 2025-05-25 03:31:48.812467 | orchestrator | ok: [testbed-node-0] 2025-05-25 03:31:48.812883 | orchestrator | ok: [testbed-manager] 2025-05-25 03:31:48.814373 | orchestrator | ok: [testbed-node-3] 2025-05-25 03:31:48.814472 | orchestrator | ok: [testbed-node-4] 
2025-05-25 03:31:48.815296 | orchestrator | ok: [testbed-node-5] 2025-05-25 03:31:48.815704 | orchestrator | 2025-05-25 03:31:48.816376 | orchestrator | PLAY [Gather facts for all hosts (if using --limit)] *************************** 2025-05-25 03:31:48.817298 | orchestrator | 2025-05-25 03:31:48.817745 | orchestrator | TASK [Gathers facts about hosts] *********************************************** 2025-05-25 03:31:48.818131 | orchestrator | Sunday 25 May 2025 03:31:48 +0000 (0:00:03.507) 0:00:03.852 ************ 2025-05-25 03:31:48.876364 | orchestrator | skipping: [testbed-manager] => (item=testbed-manager)  2025-05-25 03:31:48.916613 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-3)  2025-05-25 03:31:48.916693 | orchestrator | skipping: [testbed-node-3] => (item=testbed-manager)  2025-05-25 03:31:48.961987 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-05-25 03:31:48.962178 | orchestrator | skipping: [testbed-node-4] => (item=testbed-manager)  2025-05-25 03:31:48.962255 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-4)  2025-05-25 03:31:48.962461 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-05-25 03:31:48.962719 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-3)  2025-05-25 03:31:48.963051 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-05-25 03:31:48.964515 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-5)  2025-05-25 03:31:49.007008 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-4)  2025-05-25 03:31:49.010845 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)  2025-05-25 03:31:49.010894 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-5)  2025-05-25 03:31:49.012245 | orchestrator | skipping: [testbed-node-5] => (item=testbed-manager)  2025-05-25 03:31:49.017195 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-0)  2025-05-25 
03:31:49.017246 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)  2025-05-25 03:31:49.349154 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-0)  2025-05-25 03:31:49.349300 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-3)  2025-05-25 03:31:49.350124 | orchestrator | skipping: [testbed-node-0] => (item=testbed-manager)  2025-05-25 03:31:49.352891 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-1)  2025-05-25 03:31:49.352915 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)  2025-05-25 03:31:49.352926 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-3)  2025-05-25 03:31:49.352937 | orchestrator | skipping: [testbed-node-3] 2025-05-25 03:31:49.352948 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-1)  2025-05-25 03:31:49.352989 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-4)  2025-05-25 03:31:49.353001 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-2)  2025-05-25 03:31:49.353291 | orchestrator | skipping: [testbed-manager] 2025-05-25 03:31:49.353576 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-4)  2025-05-25 03:31:49.354131 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-2)  2025-05-25 03:31:49.354295 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-5)  2025-05-25 03:31:49.354752 | orchestrator | skipping: [testbed-node-4] 2025-05-25 03:31:49.355158 | orchestrator | skipping: [testbed-node-1] => (item=testbed-manager)  2025-05-25 03:31:49.356160 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-5)  2025-05-25 03:31:49.356398 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-3)  2025-05-25 03:31:49.357089 | orchestrator | skipping: [testbed-node-2] => (item=testbed-manager)  2025-05-25 03:31:49.357349 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-4)  2025-05-25 03:31:49.357777 | orchestrator | 
skipping: [testbed-node-0] => (item=testbed-node-0)  2025-05-25 03:31:49.358488 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-0)  2025-05-25 03:31:49.358753 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-3)  2025-05-25 03:31:49.359128 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-5)  2025-05-25 03:31:49.359818 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)  2025-05-25 03:31:49.360249 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-4)  2025-05-25 03:31:49.360679 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-0)  2025-05-25 03:31:49.361027 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-1)  2025-05-25 03:31:49.361386 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-2)  2025-05-25 03:31:49.361684 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2025-05-25 03:31:49.362135 | orchestrator | skipping: [testbed-node-5] 2025-05-25 03:31:49.362611 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-1)  2025-05-25 03:31:49.363069 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-5)  2025-05-25 03:31:49.365465 | orchestrator | skipping: [testbed-node-0] 2025-05-25 03:31:49.366427 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-2)  2025-05-25 03:31:49.367122 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-0)  2025-05-25 03:31:49.367885 | orchestrator | skipping: [testbed-node-1] 2025-05-25 03:31:49.369536 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-1)  2025-05-25 03:31:49.369904 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-2)  2025-05-25 03:31:49.371149 | orchestrator | skipping: [testbed-node-2] 2025-05-25 03:31:49.371819 | orchestrator | 2025-05-25 03:31:49.372620 | orchestrator | PLAY [Apply bootstrap roles part 1] ******************************************** 2025-05-25 03:31:49.376609 | 
orchestrator | 2025-05-25 03:31:49.377129 | orchestrator | TASK [osism.commons.hostname : Set hostname] *********************************** 2025-05-25 03:31:49.377469 | orchestrator | Sunday 25 May 2025 03:31:49 +0000 (0:00:00.536) 0:00:04.388 ************ 2025-05-25 03:31:50.575659 | orchestrator | ok: [testbed-node-3] 2025-05-25 03:31:50.575826 | orchestrator | ok: [testbed-node-0] 2025-05-25 03:31:50.576698 | orchestrator | ok: [testbed-node-2] 2025-05-25 03:31:50.578565 | orchestrator | ok: [testbed-node-4] 2025-05-25 03:31:50.578661 | orchestrator | ok: [testbed-node-5] 2025-05-25 03:31:50.579067 | orchestrator | ok: [testbed-node-1] 2025-05-25 03:31:50.579715 | orchestrator | ok: [testbed-manager] 2025-05-25 03:31:50.580593 | orchestrator | 2025-05-25 03:31:50.581142 | orchestrator | TASK [osism.commons.hostname : Copy /etc/hostname] ***************************** 2025-05-25 03:31:50.581869 | orchestrator | Sunday 25 May 2025 03:31:50 +0000 (0:00:01.226) 0:00:05.614 ************ 2025-05-25 03:31:51.732038 | orchestrator | ok: [testbed-manager] 2025-05-25 03:31:51.732146 | orchestrator | ok: [testbed-node-0] 2025-05-25 03:31:51.733863 | orchestrator | ok: [testbed-node-5] 2025-05-25 03:31:51.734353 | orchestrator | ok: [testbed-node-2] 2025-05-25 03:31:51.735683 | orchestrator | ok: [testbed-node-4] 2025-05-25 03:31:51.737239 | orchestrator | ok: [testbed-node-3] 2025-05-25 03:31:51.737951 | orchestrator | ok: [testbed-node-1] 2025-05-25 03:31:51.738367 | orchestrator | 2025-05-25 03:31:51.738859 | orchestrator | TASK [osism.commons.hosts : Include type specific tasks] *********************** 2025-05-25 03:31:51.739826 | orchestrator | Sunday 25 May 2025 03:31:51 +0000 (0:00:01.154) 0:00:06.769 ************ 2025-05-25 03:31:51.998900 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/hosts/tasks/type-template.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, 
testbed-node-2 2025-05-25 03:31:51.999033 | orchestrator | 2025-05-25 03:31:51.999471 | orchestrator | TASK [osism.commons.hosts : Copy /etc/hosts file] ****************************** 2025-05-25 03:31:51.999947 | orchestrator | Sunday 25 May 2025 03:31:51 +0000 (0:00:00.269) 0:00:07.038 ************ 2025-05-25 03:31:53.997257 | orchestrator | changed: [testbed-manager] 2025-05-25 03:31:53.997369 | orchestrator | changed: [testbed-node-4] 2025-05-25 03:31:53.997385 | orchestrator | changed: [testbed-node-3] 2025-05-25 03:31:53.997673 | orchestrator | changed: [testbed-node-0] 2025-05-25 03:31:53.998476 | orchestrator | changed: [testbed-node-5] 2025-05-25 03:31:53.998502 | orchestrator | changed: [testbed-node-1] 2025-05-25 03:31:53.998698 | orchestrator | changed: [testbed-node-2] 2025-05-25 03:31:53.999131 | orchestrator | 2025-05-25 03:31:53.999863 | orchestrator | TASK [osism.commons.proxy : Include distribution specific tasks] *************** 2025-05-25 03:31:54.000077 | orchestrator | Sunday 25 May 2025 03:31:53 +0000 (0:00:01.996) 0:00:09.035 ************ 2025-05-25 03:31:54.081249 | orchestrator | skipping: [testbed-manager] 2025-05-25 03:31:54.283206 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/proxy/tasks/Debian-family.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-05-25 03:31:54.283313 | orchestrator | 2025-05-25 03:31:54.283392 | orchestrator | TASK [osism.commons.proxy : Configure proxy parameters for apt] **************** 2025-05-25 03:31:54.284152 | orchestrator | Sunday 25 May 2025 03:31:54 +0000 (0:00:00.287) 0:00:09.322 ************ 2025-05-25 03:31:55.297337 | orchestrator | changed: [testbed-node-3] 2025-05-25 03:31:55.297444 | orchestrator | changed: [testbed-node-0] 2025-05-25 03:31:55.297459 | orchestrator | changed: [testbed-node-4] 2025-05-25 03:31:55.297631 | orchestrator | changed: [testbed-node-1] 2025-05-25 03:31:55.298381 | 
orchestrator | changed: [testbed-node-5]
2025-05-25 03:31:55.299164 | orchestrator | changed: [testbed-node-2]
2025-05-25 03:31:55.300042 | orchestrator |
2025-05-25 03:31:55.300675 | orchestrator | TASK [osism.commons.proxy : Set system wide settings in environment file] ******
2025-05-25 03:31:55.302343 | orchestrator | Sunday 25 May 2025 03:31:55 +0000 (0:00:01.009) 0:00:10.331 ************
2025-05-25 03:31:55.381376 | orchestrator | skipping: [testbed-manager]
2025-05-25 03:31:55.886287 | orchestrator | changed: [testbed-node-3]
2025-05-25 03:31:55.887262 | orchestrator | changed: [testbed-node-0]
2025-05-25 03:31:55.889538 | orchestrator | changed: [testbed-node-2]
2025-05-25 03:31:55.890479 | orchestrator | changed: [testbed-node-5]
2025-05-25 03:31:55.891378 | orchestrator | changed: [testbed-node-4]
2025-05-25 03:31:55.892032 | orchestrator | changed: [testbed-node-1]
2025-05-25 03:31:55.892451 | orchestrator |
2025-05-25 03:31:55.892905 | orchestrator | TASK [osism.commons.proxy : Remove system wide settings in environment file] ***
2025-05-25 03:31:55.893340 | orchestrator | Sunday 25 May 2025 03:31:55 +0000 (0:00:00.592) 0:00:10.923 ************
2025-05-25 03:31:55.980660 | orchestrator | skipping: [testbed-node-3]
2025-05-25 03:31:56.006312 | orchestrator | skipping: [testbed-node-4]
2025-05-25 03:31:56.033720 | orchestrator | skipping: [testbed-node-5]
2025-05-25 03:31:56.316642 | orchestrator | skipping: [testbed-node-0]
2025-05-25 03:31:56.317288 | orchestrator | skipping: [testbed-node-1]
2025-05-25 03:31:56.317947 | orchestrator | skipping: [testbed-node-2]
2025-05-25 03:31:56.321435 | orchestrator | ok: [testbed-manager]
2025-05-25 03:31:56.321499 | orchestrator |
2025-05-25 03:31:56.321514 | orchestrator | TASK [osism.commons.resolvconf : Check minimum and maximum number of name servers] ***
2025-05-25 03:31:56.321540 | orchestrator | Sunday 25 May 2025 03:31:56 +0000 (0:00:00.432) 0:00:11.356 ************
2025-05-25 03:31:56.393799 | orchestrator | skipping: [testbed-manager]
2025-05-25 03:31:56.422223 | orchestrator | skipping: [testbed-node-3]
2025-05-25 03:31:56.449453 | orchestrator | skipping: [testbed-node-4]
2025-05-25 03:31:56.469207 | orchestrator | skipping: [testbed-node-5]
2025-05-25 03:31:56.541083 | orchestrator | skipping: [testbed-node-0]
2025-05-25 03:31:56.541417 | orchestrator | skipping: [testbed-node-1]
2025-05-25 03:31:56.541918 | orchestrator | skipping: [testbed-node-2]
2025-05-25 03:31:56.543544 | orchestrator |
2025-05-25 03:31:56.546172 | orchestrator | TASK [osism.commons.resolvconf : Include resolvconf tasks] *********************
2025-05-25 03:31:56.547335 | orchestrator | Sunday 25 May 2025 03:31:56 +0000 (0:00:00.224) 0:00:11.580 ************
2025-05-25 03:31:56.852247 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/configure-resolv.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2025-05-25 03:31:56.853380 | orchestrator |
2025-05-25 03:31:56.853731 | orchestrator | TASK [osism.commons.resolvconf : Include distribution specific installation tasks] ***
2025-05-25 03:31:56.854543 | orchestrator | Sunday 25 May 2025 03:31:56 +0000 (0:00:00.310) 0:00:11.891 ************
2025-05-25 03:31:57.121391 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/install-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2025-05-25 03:31:57.124823 | orchestrator |
2025-05-25 03:31:57.124862 | orchestrator | TASK [osism.commons.resolvconf : Remove packages configuring /etc/resolv.conf] ***
2025-05-25 03:31:57.124917 | orchestrator | Sunday 25 May 2025 03:31:57 +0000 (0:00:00.267) 0:00:12.159 ************
2025-05-25 03:31:58.473994 | orchestrator | ok: [testbed-manager]
2025-05-25 03:31:58.474143 | orchestrator | ok: [testbed-node-0]
2025-05-25 03:31:58.474156 | orchestrator | ok: [testbed-node-2]
2025-05-25 03:31:58.474225 | orchestrator | ok: [testbed-node-1]
2025-05-25 03:31:58.474657 | orchestrator | ok: [testbed-node-4]
2025-05-25 03:31:58.475124 | orchestrator | ok: [testbed-node-5]
2025-05-25 03:31:58.475582 | orchestrator | ok: [testbed-node-3]
2025-05-25 03:31:58.476131 | orchestrator |
2025-05-25 03:31:58.476677 | orchestrator | TASK [osism.commons.resolvconf : Install package systemd-resolved] *************
2025-05-25 03:31:58.477131 | orchestrator | Sunday 25 May 2025 03:31:58 +0000 (0:00:01.349) 0:00:13.508 ************
2025-05-25 03:31:58.558613 | orchestrator | skipping: [testbed-manager]
2025-05-25 03:31:58.580871 | orchestrator | skipping: [testbed-node-3]
2025-05-25 03:31:58.613513 | orchestrator | skipping: [testbed-node-4]
2025-05-25 03:31:58.639044 | orchestrator | skipping: [testbed-node-5]
2025-05-25 03:31:58.697677 | orchestrator | skipping: [testbed-node-0]
2025-05-25 03:31:58.699372 | orchestrator | skipping: [testbed-node-1]
2025-05-25 03:31:58.700537 | orchestrator | skipping: [testbed-node-2]
2025-05-25 03:31:58.701771 | orchestrator |
2025-05-25 03:31:58.702493 | orchestrator | TASK [osism.commons.resolvconf : Retrieve file status of /etc/resolv.conf] *****
2025-05-25 03:31:58.703201 | orchestrator | Sunday 25 May 2025 03:31:58 +0000 (0:00:00.227) 0:00:13.736 ************
2025-05-25 03:31:59.214335 | orchestrator | ok: [testbed-manager]
2025-05-25 03:31:59.214446 | orchestrator | ok: [testbed-node-4]
2025-05-25 03:31:59.215536 | orchestrator | ok: [testbed-node-3]
2025-05-25 03:31:59.217319 | orchestrator | ok: [testbed-node-5]
2025-05-25 03:31:59.218211 | orchestrator | ok: [testbed-node-0]
2025-05-25 03:31:59.219269 | orchestrator | ok: [testbed-node-1]
2025-05-25 03:31:59.220340 | orchestrator | ok: [testbed-node-2]
2025-05-25 03:31:59.221160 | orchestrator |
2025-05-25 03:31:59.222621 | orchestrator | TASK [osism.commons.resolvconf : Archive existing file /etc/resolv.conf] *******
2025-05-25 03:31:59.223181 | orchestrator | Sunday 25 May 2025 03:31:59 +0000 (0:00:00.515) 0:00:14.251 ************
2025-05-25 03:31:59.325417 | orchestrator | skipping: [testbed-manager]
2025-05-25 03:31:59.350562 | orchestrator | skipping: [testbed-node-3]
2025-05-25 03:31:59.374919 | orchestrator | skipping: [testbed-node-4]
2025-05-25 03:31:59.444236 | orchestrator | skipping: [testbed-node-5]
2025-05-25 03:31:59.445940 | orchestrator | skipping: [testbed-node-0]
2025-05-25 03:31:59.447901 | orchestrator | skipping: [testbed-node-1]
2025-05-25 03:31:59.447935 | orchestrator | skipping: [testbed-node-2]
2025-05-25 03:31:59.447947 | orchestrator |
2025-05-25 03:31:59.448798 | orchestrator | TASK [osism.commons.resolvconf : Link /run/systemd/resolve/stub-resolv.conf to /etc/resolv.conf] ***
2025-05-25 03:31:59.449620 | orchestrator | Sunday 25 May 2025 03:31:59 +0000 (0:00:00.232) 0:00:14.483 ************
2025-05-25 03:31:59.966570 | orchestrator | ok: [testbed-manager]
2025-05-25 03:31:59.967536 | orchestrator | changed: [testbed-node-3]
2025-05-25 03:31:59.969048 | orchestrator | changed: [testbed-node-4]
2025-05-25 03:31:59.970127 | orchestrator | changed: [testbed-node-0]
2025-05-25 03:31:59.974240 | orchestrator | changed: [testbed-node-1]
2025-05-25 03:31:59.975007 | orchestrator | changed: [testbed-node-5]
2025-05-25 03:31:59.975891 | orchestrator | changed: [testbed-node-2]
2025-05-25 03:31:59.976809 | orchestrator |
2025-05-25 03:31:59.977533 | orchestrator | TASK [osism.commons.resolvconf : Copy configuration files] *********************
2025-05-25 03:31:59.978259 | orchestrator | Sunday 25 May 2025 03:31:59 +0000 (0:00:00.521) 0:00:15.005 ************
2025-05-25 03:32:01.270307 | orchestrator | ok: [testbed-manager]
2025-05-25 03:32:01.271223 | orchestrator | changed: [testbed-node-3]
2025-05-25 03:32:01.271813 | orchestrator | changed: [testbed-node-4]
2025-05-25 03:32:01.272595 | orchestrator | changed: [testbed-node-0]
2025-05-25 03:32:01.274144 | orchestrator | changed: [testbed-node-5]
2025-05-25 03:32:01.275478 | orchestrator | changed: [testbed-node-1]
2025-05-25 03:32:01.276315 | orchestrator | changed: [testbed-node-2]
2025-05-25 03:32:01.277348 | orchestrator |
2025-05-25 03:32:01.277913 | orchestrator | TASK [osism.commons.resolvconf : Start/enable systemd-resolved service] ********
2025-05-25 03:32:01.278788 | orchestrator | Sunday 25 May 2025 03:32:01 +0000 (0:00:01.303) 0:00:16.308 ************
2025-05-25 03:32:02.272088 | orchestrator | ok: [testbed-manager]
2025-05-25 03:32:02.274521 | orchestrator | ok: [testbed-node-0]
2025-05-25 03:32:02.277153 | orchestrator | ok: [testbed-node-3]
2025-05-25 03:32:02.280464 | orchestrator | ok: [testbed-node-2]
2025-05-25 03:32:02.281654 | orchestrator | ok: [testbed-node-1]
2025-05-25 03:32:02.282435 | orchestrator | ok: [testbed-node-5]
2025-05-25 03:32:02.283304 | orchestrator | ok: [testbed-node-4]
2025-05-25 03:32:02.284092 | orchestrator |
2025-05-25 03:32:02.284695 | orchestrator | TASK [osism.commons.resolvconf : Include distribution specific configuration tasks] ***
2025-05-25 03:32:02.285860 | orchestrator | Sunday 25 May 2025 03:32:02 +0000 (0:00:01.001) 0:00:17.310 ************
2025-05-25 03:32:02.585082 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/configure-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2025-05-25 03:32:02.585188 | orchestrator |
2025-05-25 03:32:02.585620 | orchestrator | TASK [osism.commons.resolvconf : Restart systemd-resolved service] *************
2025-05-25 03:32:02.586744 | orchestrator | Sunday 25 May 2025 03:32:02 +0000 (0:00:00.312) 0:00:17.623 ************
2025-05-25 03:32:02.661017 | orchestrator | skipping: [testbed-manager]
2025-05-25 03:32:03.816623 | orchestrator | changed: [testbed-node-3]
2025-05-25 03:32:03.816735 | orchestrator | changed: [testbed-node-0]
2025-05-25 03:32:03.816752 | orchestrator | changed: [testbed-node-2]
2025-05-25 03:32:03.817129 | orchestrator | changed: [testbed-node-1]
2025-05-25 03:32:03.817242 | orchestrator | changed: [testbed-node-5]
2025-05-25 03:32:03.817259 | orchestrator | changed: [testbed-node-4]
2025-05-25 03:32:03.818902 | orchestrator |
2025-05-25 03:32:03.819760 | orchestrator | TASK [osism.commons.repository : Gather variables for each operating system] ***
2025-05-25 03:32:03.820680 | orchestrator | Sunday 25 May 2025 03:32:03 +0000 (0:00:01.228) 0:00:18.852 ************
2025-05-25 03:32:03.890275 | orchestrator | ok: [testbed-manager]
2025-05-25 03:32:03.917199 | orchestrator | ok: [testbed-node-3]
2025-05-25 03:32:03.941907 | orchestrator | ok: [testbed-node-4]
2025-05-25 03:32:03.973419 | orchestrator | ok: [testbed-node-5]
2025-05-25 03:32:04.042686 | orchestrator | ok: [testbed-node-0]
2025-05-25 03:32:04.044616 | orchestrator | ok: [testbed-node-1]
2025-05-25 03:32:04.048620 | orchestrator | ok: [testbed-node-2]
2025-05-25 03:32:04.049683 | orchestrator |
2025-05-25 03:32:04.050121 | orchestrator | TASK [osism.commons.repository : Set repository_default fact to default value] ***
2025-05-25 03:32:04.050759 | orchestrator | Sunday 25 May 2025 03:32:04 +0000 (0:00:00.229) 0:00:19.081 ************
2025-05-25 03:32:04.136165 | orchestrator | ok: [testbed-manager]
2025-05-25 03:32:04.171158 | orchestrator | ok: [testbed-node-3]
2025-05-25 03:32:04.196998 | orchestrator | ok: [testbed-node-4]
2025-05-25 03:32:04.222263 | orchestrator | ok: [testbed-node-5]
2025-05-25 03:32:04.285196 | orchestrator | ok: [testbed-node-0]
2025-05-25 03:32:04.285422 | orchestrator | ok: [testbed-node-1]
2025-05-25 03:32:04.286111 | orchestrator | ok: [testbed-node-2]
2025-05-25 03:32:04.287118 | orchestrator |
2025-05-25 03:32:04.287704 | orchestrator | TASK [osism.commons.repository : Set repositories to default] ******************
2025-05-25 03:32:04.288042 | orchestrator | Sunday 25 May 2025 03:32:04 +0000 (0:00:00.243) 0:00:19.325 ************
2025-05-25 03:32:04.386688 | orchestrator | ok: [testbed-manager]
2025-05-25 03:32:04.410428 | orchestrator | ok: [testbed-node-3]
2025-05-25 03:32:04.441445 | orchestrator | ok: [testbed-node-4]
2025-05-25 03:32:04.472235 | orchestrator | ok: [testbed-node-5]
2025-05-25 03:32:04.555640 | orchestrator | ok: [testbed-node-0]
2025-05-25 03:32:04.557180 | orchestrator | ok: [testbed-node-1]
2025-05-25 03:32:04.558864 | orchestrator | ok: [testbed-node-2]
2025-05-25 03:32:04.560341 | orchestrator |
2025-05-25 03:32:04.561304 | orchestrator | TASK [osism.commons.repository : Include distribution specific repository tasks] ***
2025-05-25 03:32:04.562156 | orchestrator | Sunday 25 May 2025 03:32:04 +0000 (0:00:00.268) 0:00:19.594 ************
2025-05-25 03:32:04.874383 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/repository/tasks/Ubuntu.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2025-05-25 03:32:04.877109 | orchestrator |
2025-05-25 03:32:04.877199 | orchestrator | TASK [osism.commons.repository : Create /etc/apt/sources.list.d directory] *****
2025-05-25 03:32:04.878257 | orchestrator | Sunday 25 May 2025 03:32:04 +0000 (0:00:00.318) 0:00:19.913 ************
2025-05-25 03:32:05.398758 | orchestrator | ok: [testbed-manager]
2025-05-25 03:32:05.398844 | orchestrator | ok: [testbed-node-3]
2025-05-25 03:32:05.399120 | orchestrator | ok: [testbed-node-4]
2025-05-25 03:32:05.399785 | orchestrator | ok: [testbed-node-0]
2025-05-25 03:32:05.400710 | orchestrator | ok: [testbed-node-5]
2025-05-25 03:32:05.401325 | orchestrator | ok: [testbed-node-1]
2025-05-25 03:32:05.401869 | orchestrator | ok: [testbed-node-2]
2025-05-25 03:32:05.402598 | orchestrator |
2025-05-25 03:32:05.403025 | orchestrator | TASK [osism.commons.repository : Include tasks for Ubuntu < 24.04] *************
2025-05-25 03:32:05.403545 | orchestrator | Sunday 25 May 2025 03:32:05 +0000 (0:00:00.520) 0:00:20.433 ************
2025-05-25 03:32:05.483523 | orchestrator | skipping: [testbed-manager]
2025-05-25 03:32:05.510134 | orchestrator | skipping: [testbed-node-3]
2025-05-25 03:32:05.532631 | orchestrator | skipping: [testbed-node-4]
2025-05-25 03:32:05.559027 | orchestrator | skipping: [testbed-node-5]
2025-05-25 03:32:05.642744 | orchestrator | skipping: [testbed-node-0]
2025-05-25 03:32:05.642836 | orchestrator | skipping: [testbed-node-1]
2025-05-25 03:32:05.642850 | orchestrator | skipping: [testbed-node-2]
2025-05-25 03:32:05.643428 | orchestrator |
2025-05-25 03:32:05.643933 | orchestrator | TASK [osism.commons.repository : Copy 99osism apt configuration] ***************
2025-05-25 03:32:05.645358 | orchestrator | Sunday 25 May 2025 03:32:05 +0000 (0:00:00.246) 0:00:20.680 ************
2025-05-25 03:32:06.685268 | orchestrator | ok: [testbed-manager]
2025-05-25 03:32:06.687092 | orchestrator | ok: [testbed-node-3]
2025-05-25 03:32:06.687122 | orchestrator | ok: [testbed-node-4]
2025-05-25 03:32:06.687479 | orchestrator | ok: [testbed-node-5]
2025-05-25 03:32:06.689326 | orchestrator | changed: [testbed-node-0]
2025-05-25 03:32:06.689702 | orchestrator | changed: [testbed-node-2]
2025-05-25 03:32:06.690771 | orchestrator | changed: [testbed-node-1]
2025-05-25 03:32:06.691447 | orchestrator |
2025-05-25 03:32:06.692129 | orchestrator | TASK [osism.commons.repository : Remove sources.list file] *********************
2025-05-25 03:32:06.693062 | orchestrator | Sunday 25 May 2025 03:32:06 +0000 (0:00:01.040) 0:00:21.720 ************
2025-05-25 03:32:07.254619 | orchestrator | ok: [testbed-manager]
2025-05-25 03:32:07.254965 | orchestrator | ok: [testbed-node-3]
2025-05-25 03:32:07.256043 | orchestrator | ok: [testbed-node-4]
2025-05-25 03:32:07.256299 | orchestrator | ok: [testbed-node-0]
2025-05-25 03:32:07.259619 | orchestrator | ok: [testbed-node-5]
2025-05-25 03:32:07.259921 | orchestrator | ok: [testbed-node-1]
2025-05-25 03:32:07.260646 | orchestrator | ok: [testbed-node-2]
2025-05-25 03:32:07.261220 | orchestrator |
2025-05-25 03:32:07.262293 | orchestrator | TASK [osism.commons.repository : Copy ubuntu.sources file] *********************
2025-05-25 03:32:07.262510 | orchestrator | Sunday 25 May 2025 03:32:07 +0000 (0:00:00.573) 0:00:22.293 ************
2025-05-25 03:32:08.391026 | orchestrator | ok: [testbed-manager]
2025-05-25 03:32:08.391133 | orchestrator | ok: [testbed-node-3]
2025-05-25 03:32:08.392822 | orchestrator | ok: [testbed-node-4]
2025-05-25 03:32:08.392888 | orchestrator | ok: [testbed-node-5]
2025-05-25 03:32:08.393592 | orchestrator | changed: [testbed-node-0]
2025-05-25 03:32:08.394216 | orchestrator | changed: [testbed-node-1]
2025-05-25 03:32:08.395094 | orchestrator | changed: [testbed-node-2]
2025-05-25 03:32:08.395886 | orchestrator |
2025-05-25 03:32:08.396714 | orchestrator | TASK [osism.commons.repository : Update package cache] *************************
2025-05-25 03:32:08.397611 | orchestrator | Sunday 25 May 2025 03:32:08 +0000 (0:00:01.135) 0:00:23.429 ************
2025-05-25 03:32:22.141519 | orchestrator | ok: [testbed-node-3]
2025-05-25 03:32:22.141632 | orchestrator | ok: [testbed-node-4]
2025-05-25 03:32:22.141762 | orchestrator | ok: [testbed-node-5]
2025-05-25 03:32:22.141779 | orchestrator | changed: [testbed-manager]
2025-05-25 03:32:22.141790 | orchestrator | changed: [testbed-node-0]
2025-05-25 03:32:22.141799 | orchestrator | changed: [testbed-node-2]
2025-05-25 03:32:22.141808 | orchestrator | changed: [testbed-node-1]
2025-05-25 03:32:22.141819 | orchestrator |
2025-05-25 03:32:22.141867 | orchestrator | TASK [osism.services.rsyslog : Gather variables for each operating system] *****
2025-05-25 03:32:22.142303 | orchestrator | Sunday 25 May 2025 03:32:22 +0000 (0:00:13.743) 0:00:37.172 ************
2025-05-25 03:32:22.221558 | orchestrator | ok: [testbed-manager]
2025-05-25 03:32:22.246116 | orchestrator | ok: [testbed-node-3]
2025-05-25 03:32:22.273209 | orchestrator | ok: [testbed-node-4]
2025-05-25 03:32:22.307324 | orchestrator | ok: [testbed-node-5]
2025-05-25 03:32:22.361931 | orchestrator | ok: [testbed-node-0]
2025-05-25 03:32:22.362938 | orchestrator | ok: [testbed-node-1]
2025-05-25 03:32:22.363869 | orchestrator | ok: [testbed-node-2]
2025-05-25 03:32:22.364733 | orchestrator |
2025-05-25 03:32:22.365608 | orchestrator | TASK [osism.services.rsyslog : Set rsyslog_user variable to default value] *****
2025-05-25 03:32:22.366640 | orchestrator | Sunday 25 May 2025 03:32:22 +0000 (0:00:00.228) 0:00:37.401 ************
2025-05-25 03:32:22.437083 | orchestrator | ok: [testbed-manager]
2025-05-25 03:32:22.466275 | orchestrator | ok: [testbed-node-3]
2025-05-25 03:32:22.489738 | orchestrator | ok: [testbed-node-4]
2025-05-25 03:32:22.518225 | orchestrator | ok: [testbed-node-5]
2025-05-25 03:32:22.575865 | orchestrator | ok: [testbed-node-0]
2025-05-25 03:32:22.576509 | orchestrator | ok: [testbed-node-1]
2025-05-25 03:32:22.578525 | orchestrator | ok: [testbed-node-2]
2025-05-25 03:32:22.578732 | orchestrator |
2025-05-25 03:32:22.579946 | orchestrator | TASK [osism.services.rsyslog : Set rsyslog_workdir variable to default value] ***
2025-05-25 03:32:22.580847 | orchestrator | Sunday 25 May 2025 03:32:22 +0000 (0:00:00.213) 0:00:37.615 ************
2025-05-25 03:32:22.652500 | orchestrator | ok: [testbed-manager]
2025-05-25 03:32:22.679391 | orchestrator | ok: [testbed-node-3]
2025-05-25 03:32:22.705362 | orchestrator | ok: [testbed-node-4]
2025-05-25 03:32:22.742535 | orchestrator | ok: [testbed-node-5]
2025-05-25 03:32:22.814186 | orchestrator | ok: [testbed-node-0]
2025-05-25 03:32:22.814825 | orchestrator | ok: [testbed-node-1]
2025-05-25 03:32:22.816023 | orchestrator | ok: [testbed-node-2]
2025-05-25 03:32:22.816732 | orchestrator |
2025-05-25 03:32:22.817174 | orchestrator | TASK [osism.services.rsyslog : Include distribution specific install tasks] ****
2025-05-25 03:32:22.817557 | orchestrator | Sunday 25 May 2025 03:32:22 +0000 (0:00:00.238) 0:00:37.853 ************
2025-05-25 03:32:23.112778 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/rsyslog/tasks/install-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2025-05-25 03:32:23.112926 | orchestrator |
2025-05-25 03:32:23.113094 | orchestrator | TASK [osism.services.rsyslog : Install rsyslog package] ************************
2025-05-25 03:32:23.114366 | orchestrator | Sunday 25 May 2025 03:32:23 +0000 (0:00:00.298) 0:00:38.152 ************
2025-05-25 03:32:24.727171 | orchestrator | ok: [testbed-manager]
2025-05-25 03:32:24.727274 | orchestrator | ok: [testbed-node-3]
2025-05-25 03:32:24.730168 | orchestrator | ok: [testbed-node-4]
2025-05-25 03:32:24.731506 | orchestrator | ok: [testbed-node-2]
2025-05-25 03:32:24.731530 | orchestrator | ok: [testbed-node-1]
2025-05-25 03:32:24.736041 | orchestrator | ok: [testbed-node-5]
2025-05-25 03:32:24.736503 | orchestrator | ok: [testbed-node-0]
2025-05-25 03:32:24.737246 | orchestrator |
2025-05-25 03:32:24.738091 | orchestrator | TASK [osism.services.rsyslog : Copy rsyslog.conf configuration file] ***********
2025-05-25 03:32:24.739428 | orchestrator | Sunday 25 May 2025 03:32:24 +0000 (0:00:01.611) 0:00:39.764 ************
2025-05-25 03:32:25.793968 | orchestrator | changed: [testbed-manager]
2025-05-25 03:32:25.794291 | orchestrator | changed: [testbed-node-3]
2025-05-25 03:32:25.797585 | orchestrator | changed: [testbed-node-4]
2025-05-25 03:32:25.798585 | orchestrator | changed: [testbed-node-5]
2025-05-25 03:32:25.799759 | orchestrator | changed: [testbed-node-1]
2025-05-25 03:32:25.800875 | orchestrator | changed: [testbed-node-0]
2025-05-25 03:32:25.804268 | orchestrator | changed: [testbed-node-2]
2025-05-25 03:32:25.805277 | orchestrator |
2025-05-25 03:32:25.806345 | orchestrator | TASK [osism.services.rsyslog : Manage rsyslog service] *************************
2025-05-25 03:32:25.807129 | orchestrator | Sunday 25 May 2025 03:32:25 +0000 (0:00:01.067) 0:00:40.831 ************
2025-05-25 03:32:26.595591 | orchestrator | ok: [testbed-manager]
2025-05-25 03:32:26.596880 | orchestrator | ok: [testbed-node-3]
2025-05-25 03:32:26.597279 | orchestrator | ok: [testbed-node-4]
2025-05-25 03:32:26.598723 | orchestrator | ok: [testbed-node-0]
2025-05-25 03:32:26.599600 | orchestrator | ok: [testbed-node-5]
2025-05-25 03:32:26.600384 | orchestrator | ok: [testbed-node-1]
2025-05-25 03:32:26.601749 | orchestrator | ok: [testbed-node-2]
2025-05-25 03:32:26.602124 | orchestrator |
2025-05-25 03:32:26.602821 | orchestrator | TASK [osism.services.rsyslog : Include fluentd tasks] **************************
2025-05-25 03:32:26.603230 | orchestrator | Sunday 25 May 2025 03:32:26 +0000 (0:00:00.802) 0:00:41.634 ************
2025-05-25 03:32:26.886910 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/rsyslog/tasks/fluentd.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2025-05-25 03:32:26.888211 | orchestrator |
2025-05-25 03:32:26.889930 | orchestrator | TASK [osism.services.rsyslog : Forward syslog message to local fluentd daemon] ***
2025-05-25 03:32:26.890662 | orchestrator | Sunday 25 May 2025 03:32:26 +0000 (0:00:00.289) 0:00:41.923 ************
2025-05-25 03:32:27.849814 | orchestrator | changed: [testbed-manager]
2025-05-25 03:32:27.850114 | orchestrator | changed: [testbed-node-3]
2025-05-25 03:32:27.851033 | orchestrator | changed: [testbed-node-4]
2025-05-25 03:32:27.852564 | orchestrator | changed: [testbed-node-0]
2025-05-25 03:32:27.852675 | orchestrator | changed: [testbed-node-5]
2025-05-25 03:32:27.853189 | orchestrator | changed: [testbed-node-1]
2025-05-25 03:32:27.853846 | orchestrator | changed: [testbed-node-2]
2025-05-25 03:32:27.854337 | orchestrator |
2025-05-25 03:32:27.855050 | orchestrator | TASK [osism.services.rsyslog : Include additional log server tasks] ************
2025-05-25 03:32:27.855311 | orchestrator | Sunday 25 May 2025 03:32:27 +0000 (0:00:00.962) 0:00:42.885 ************
2025-05-25 03:32:27.959654 | orchestrator | skipping: [testbed-manager]
2025-05-25 03:32:27.979969 | orchestrator | skipping: [testbed-node-3]
2025-05-25 03:32:28.009884 | orchestrator | skipping: [testbed-node-4]
2025-05-25 03:32:28.158657 | orchestrator | skipping: [testbed-node-5]
2025-05-25 03:32:28.159517 | orchestrator | skipping: [testbed-node-0]
2025-05-25 03:32:28.162868 | orchestrator | skipping: [testbed-node-1]
2025-05-25 03:32:28.162895 | orchestrator | skipping: [testbed-node-2]
2025-05-25 03:32:28.163186 | orchestrator |
2025-05-25 03:32:28.164231 | orchestrator | TASK [osism.commons.systohc : Install util-linux-extra package] ****************
2025-05-25 03:32:28.165124 | orchestrator | Sunday 25 May 2025 03:32:28 +0000 (0:00:00.312) 0:00:43.197 ************
2025-05-25 03:32:39.296381 | orchestrator | changed: [testbed-node-3]
2025-05-25 03:32:39.296501 | orchestrator | changed: [testbed-node-2]
2025-05-25 03:32:39.296518 | orchestrator | changed: [testbed-node-1]
2025-05-25 03:32:39.297980 | orchestrator | changed: [testbed-node-0]
2025-05-25 03:32:39.299468 | orchestrator | changed: [testbed-node-4]
2025-05-25 03:32:39.300818 | orchestrator | changed: [testbed-node-5]
2025-05-25 03:32:39.302063 | orchestrator | changed: [testbed-manager]
2025-05-25 03:32:39.302712 | orchestrator |
2025-05-25 03:32:39.303604 | orchestrator | TASK [osism.commons.systohc : Sync hardware clock] *****************************
2025-05-25 03:32:39.304536 | orchestrator | Sunday 25 May 2025 03:32:39 +0000 (0:00:11.132) 0:00:54.329 ************
2025-05-25 03:32:40.704438 | orchestrator | ok: [testbed-node-0]
2025-05-25 03:32:40.704538 | orchestrator | ok: [testbed-node-4]
2025-05-25 03:32:40.704602 | orchestrator | ok: [testbed-node-3]
2025-05-25 03:32:40.705806 | orchestrator | ok: [testbed-node-2]
2025-05-25 03:32:40.706182 | orchestrator | ok: [testbed-node-5]
2025-05-25 03:32:40.706988 | orchestrator | ok: [testbed-node-1]
2025-05-25 03:32:40.707873 | orchestrator | ok: [testbed-manager]
2025-05-25 03:32:40.708855 | orchestrator |
2025-05-25 03:32:40.709359 | orchestrator | TASK [osism.commons.configfs : Start sys-kernel-config mount] ******************
2025-05-25 03:32:40.710339 | orchestrator | Sunday 25 May 2025 03:32:40 +0000 (0:00:01.408) 0:00:55.738 ************
2025-05-25 03:32:41.583398 | orchestrator | ok: [testbed-node-3]
2025-05-25 03:32:41.583530 | orchestrator | ok: [testbed-node-4]
2025-05-25 03:32:41.584182 | orchestrator | ok: [testbed-manager]
2025-05-25 03:32:41.584963 | orchestrator | ok: [testbed-node-5]
2025-05-25 03:32:41.585352 | orchestrator | ok: [testbed-node-0]
2025-05-25 03:32:41.586163 | orchestrator | ok: [testbed-node-1]
2025-05-25 03:32:41.587789 | orchestrator | ok: [testbed-node-2]
2025-05-25 03:32:41.591769 | orchestrator |
2025-05-25 03:32:41.592119 | orchestrator | TASK [osism.commons.packages : Gather variables for each operating system] *****
2025-05-25 03:32:41.592963 | orchestrator | Sunday 25 May 2025 03:32:41 +0000 (0:00:00.875) 0:00:56.614 ************
2025-05-25 03:32:41.665173 | orchestrator | ok: [testbed-manager]
2025-05-25 03:32:41.697766 | orchestrator | ok: [testbed-node-3]
2025-05-25 03:32:41.721354 | orchestrator | ok: [testbed-node-4]
2025-05-25 03:32:41.744950 | orchestrator | ok: [testbed-node-5]
2025-05-25 03:32:41.813190 | orchestrator | ok: [testbed-node-0]
2025-05-25 03:32:41.814814 | orchestrator | ok: [testbed-node-1]
2025-05-25 03:32:41.816636 | orchestrator | ok: [testbed-node-2]
2025-05-25 03:32:41.818281 | orchestrator |
2025-05-25 03:32:41.819228 | orchestrator | TASK [osism.commons.packages : Set required_packages_distribution variable to default value] ***
2025-05-25 03:32:41.820210 | orchestrator | Sunday 25 May 2025 03:32:41 +0000 (0:00:00.237) 0:00:56.851 ************
2025-05-25 03:32:41.907142 | orchestrator | ok: [testbed-manager]
2025-05-25 03:32:41.935596 | orchestrator | ok: [testbed-node-3]
2025-05-25 03:32:41.957740 | orchestrator | ok: [testbed-node-4]
2025-05-25 03:32:41.987283 | orchestrator | ok: [testbed-node-5]
2025-05-25 03:32:42.045819 | orchestrator | ok: [testbed-node-0]
2025-05-25 03:32:42.046789 | orchestrator | ok: [testbed-node-1]
2025-05-25 03:32:42.047782 | orchestrator | ok: [testbed-node-2]
2025-05-25 03:32:42.048450 | orchestrator |
2025-05-25 03:32:42.048832 | orchestrator | TASK [osism.commons.packages : Include distribution specific package tasks] ****
2025-05-25 03:32:42.049709 | orchestrator | Sunday 25 May 2025 03:32:42 +0000 (0:00:00.232) 0:00:57.084 ************
2025-05-25 03:32:42.366664 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/packages/tasks/package-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2025-05-25 03:32:42.367202 | orchestrator |
2025-05-25 03:32:42.367955 | orchestrator | TASK [osism.commons.packages : Install needrestart package] ********************
2025-05-25 03:32:42.369116 | orchestrator | Sunday 25 May 2025 03:32:42 +0000 (0:00:00.319) 0:00:57.404 ************
2025-05-25 03:32:43.877473 | orchestrator | ok: [testbed-manager]
2025-05-25 03:32:43.879560 | orchestrator | ok: [testbed-node-3]
2025-05-25 03:32:43.880151 | orchestrator | ok: [testbed-node-0]
2025-05-25 03:32:43.880866 | orchestrator | ok: [testbed-node-1]
2025-05-25 03:32:43.882092 | orchestrator | ok: [testbed-node-2]
2025-05-25 03:32:43.882675 | orchestrator | ok: [testbed-node-5]
2025-05-25 03:32:43.883401 | orchestrator | ok: [testbed-node-4]
2025-05-25 03:32:43.884074 | orchestrator |
2025-05-25 03:32:43.884503 | orchestrator | TASK [osism.commons.packages : Set needrestart mode] ***************************
2025-05-25 03:32:43.885245 | orchestrator | Sunday 25 May 2025 03:32:43 +0000 (0:00:01.511) 0:00:58.915 ************
2025-05-25 03:32:44.421798 | orchestrator | changed: [testbed-manager]
2025-05-25 03:32:44.422586 | orchestrator | changed: [testbed-node-3]
2025-05-25 03:32:44.423222 | orchestrator | changed: [testbed-node-5]
2025-05-25 03:32:44.423862 | orchestrator | changed: [testbed-node-0]
2025-05-25 03:32:44.425092 | orchestrator | changed: [testbed-node-2]
2025-05-25 03:32:44.426640 | orchestrator | changed: [testbed-node-1]
2025-05-25 03:32:44.427343 | orchestrator | changed: [testbed-node-4]
2025-05-25 03:32:44.427933 | orchestrator |
2025-05-25 03:32:44.428428 | orchestrator | TASK [osism.commons.packages : Set apt_cache_valid_time variable to default value] ***
2025-05-25 03:32:44.429199 | orchestrator | Sunday 25 May 2025 03:32:44 +0000 (0:00:00.545) 0:00:59.460 ************
2025-05-25 03:32:44.503302 | orchestrator | ok: [testbed-manager]
2025-05-25 03:32:44.524488 | orchestrator | ok: [testbed-node-3]
2025-05-25 03:32:44.550137 | orchestrator | ok: [testbed-node-4]
2025-05-25 03:32:44.577398 | orchestrator | ok: [testbed-node-5]
2025-05-25 03:32:44.638347 | orchestrator | ok: [testbed-node-0]
2025-05-25 03:32:44.639487 | orchestrator | ok: [testbed-node-1]
2025-05-25 03:32:44.640295 | orchestrator | ok: [testbed-node-2]
2025-05-25 03:32:44.641404 | orchestrator |
2025-05-25 03:32:44.642185 | orchestrator | TASK [osism.commons.packages : Update package cache] ***************************
2025-05-25 03:32:44.643277 | orchestrator | Sunday 25 May 2025 03:32:44 +0000 (0:00:00.216) 0:00:59.677 ************
2025-05-25 03:32:45.680179 | orchestrator | ok: [testbed-manager]
2025-05-25 03:32:45.680312 | orchestrator | ok: [testbed-node-3]
2025-05-25 03:32:45.680328 | orchestrator | ok: [testbed-node-0]
2025-05-25 03:32:45.682335 | orchestrator | ok: [testbed-node-5]
2025-05-25 03:32:45.682851 | orchestrator | ok: [testbed-node-4]
2025-05-25 03:32:45.684518 | orchestrator | ok: [testbed-node-2]
2025-05-25 03:32:45.685585 | orchestrator | ok: [testbed-node-1]
2025-05-25 03:32:45.686462 | orchestrator |
2025-05-25 03:32:45.687161 | orchestrator | TASK [osism.commons.packages : Download upgrade packages] **********************
2025-05-25 03:32:45.687807 | orchestrator | Sunday 25 May 2025 03:32:45 +0000 (0:00:01.036) 0:01:00.714 ************
2025-05-25 03:32:47.192071 | orchestrator | changed: [testbed-manager]
2025-05-25 03:32:47.193519 | orchestrator | changed: [testbed-node-3]
2025-05-25 03:32:47.194448 | orchestrator | changed: [testbed-node-0]
2025-05-25 03:32:47.194738 | orchestrator | changed: [testbed-node-1]
2025-05-25 03:32:47.195373 | orchestrator | changed: [testbed-node-5]
2025-05-25 03:32:47.196304 | orchestrator | changed: [testbed-node-2]
2025-05-25 03:32:47.197094 | orchestrator | changed: [testbed-node-4]
2025-05-25 03:32:47.197909 | orchestrator |
2025-05-25 03:32:47.198611 | orchestrator | TASK [osism.commons.packages : Upgrade packages] *******************************
2025-05-25 03:32:47.199142 | orchestrator | Sunday 25 May 2025 03:32:47 +0000 (0:00:01.515) 0:01:02.230 ************
2025-05-25 03:32:49.264894 | orchestrator | ok: [testbed-node-3]
2025-05-25 03:32:49.268954 | orchestrator | ok: [testbed-manager]
2025-05-25 03:32:49.268989 | orchestrator | ok: [testbed-node-0]
2025-05-25 03:32:49.271224 | orchestrator | ok: [testbed-node-1]
2025-05-25 03:32:49.275275 | orchestrator | ok: [testbed-node-5]
2025-05-25 03:32:49.275299 | orchestrator | ok: [testbed-node-2]
2025-05-25 03:32:49.279927 | orchestrator | ok: [testbed-node-4]
2025-05-25 03:32:49.283155 | orchestrator |
2025-05-25 03:32:49.283217 | orchestrator | TASK [osism.commons.packages : Download required packages] *********************
2025-05-25 03:32:49.284143 | orchestrator | Sunday 25 May 2025 03:32:49 +0000 (0:00:02.072) 0:01:04.302 ************
2025-05-25 03:33:26.649767 | orchestrator | ok: [testbed-manager]
2025-05-25 03:33:26.650216 | orchestrator | ok: [testbed-node-3]
2025-05-25 03:33:26.650253 | orchestrator | ok: [testbed-node-4]
2025-05-25 03:33:26.651772 | orchestrator | ok: [testbed-node-2]
2025-05-25 03:33:26.652195 | orchestrator | ok: [testbed-node-0]
2025-05-25 03:33:26.653093 | orchestrator | ok: [testbed-node-5]
2025-05-25 03:33:26.655095 | orchestrator | ok: [testbed-node-1]
2025-05-25 03:33:26.655139 | orchestrator |
2025-05-25 03:33:26.655884 | orchestrator | TASK [osism.commons.packages : Install required packages] **********************
2025-05-25 03:33:26.656881 | orchestrator | Sunday 25 May 2025 03:33:26 +0000 (0:00:37.383) 0:01:41.685 ************
2025-05-25 03:34:40.070210 | orchestrator | changed: [testbed-manager]
2025-05-25 03:34:40.070328 | orchestrator | changed: [testbed-node-0]
2025-05-25 03:34:40.070344 | orchestrator | changed: [testbed-node-2]
2025-05-25 03:34:40.070356 | orchestrator | changed: [testbed-node-3]
2025-05-25 03:34:40.070475 | orchestrator | changed: [testbed-node-1]
2025-05-25 03:34:40.072120 | orchestrator | changed: [testbed-node-5]
2025-05-25 03:34:40.072700 | orchestrator | changed: [testbed-node-4]
2025-05-25 03:34:40.073855 | orchestrator |
2025-05-25 03:34:40.074808 | orchestrator | TASK [osism.commons.packages : Remove useless packages from the cache] *********
2025-05-25 03:34:40.075827 | orchestrator | Sunday 25 May 2025 03:34:40 +0000 (0:01:13.417) 0:02:55.103 ************
2025-05-25 03:34:41.605586 | orchestrator | ok: [testbed-manager]
2025-05-25 03:34:41.606232 | orchestrator | ok: [testbed-node-3]
2025-05-25 03:34:41.606830 | orchestrator | ok: [testbed-node-5]
2025-05-25 03:34:41.608961 | orchestrator | ok: [testbed-node-0]
2025-05-25 03:34:41.609534 | orchestrator | ok: [testbed-node-2]
2025-05-25 03:34:41.610983 | orchestrator | ok: [testbed-node-1]
2025-05-25 03:34:41.612132 | orchestrator | ok: [testbed-node-4]
2025-05-25 03:34:41.613228 | orchestrator |
2025-05-25 03:34:41.614156 | orchestrator | TASK [osism.commons.packages : Remove dependencies that are no longer required] ***
2025-05-25 03:34:41.615094 | orchestrator | Sunday 25 May 2025 03:34:41 +0000 (0:00:01.539) 0:02:56.643 ************
2025-05-25 03:34:53.060569 | orchestrator | ok: [testbed-node-3]
2025-05-25 03:34:53.060691 | orchestrator | ok: [testbed-node-4]
2025-05-25 03:34:53.060706 | orchestrator | ok: [testbed-node-2]
2025-05-25 03:34:53.062002 | orchestrator | ok: [testbed-node-0]
2025-05-25 03:34:53.063000 | orchestrator | ok: [testbed-node-1]
2025-05-25 03:34:53.063994 | orchestrator | ok: [testbed-node-5]
2025-05-25 03:34:53.064939 | orchestrator | changed: [testbed-manager]
2025-05-25 03:34:53.066480 | orchestrator |
2025-05-25 03:34:53.067543 | orchestrator | TASK [osism.commons.sysctl : Include sysctl tasks] *****************************
2025-05-25 03:34:53.068398 | orchestrator | Sunday 25 May 2025 03:34:53 +0000 (0:00:11.452) 0:03:08.095 ************
2025-05-25 03:34:53.415831 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 => (item={'key': 'elasticsearch', 'value': [{'name': 'vm.max_map_count', 'value': 262144}]})
2025-05-25 03:34:53.416330 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 => (item={'key': 'rabbitmq', 'value': [{'name': 'net.ipv4.tcp_keepalive_time', 'value': 6}, {'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3}, {'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3}, {'name': 'net.core.wmem_max', 'value': 16777216}, {'name': 'net.core.rmem_max', 'value': 16777216}, {'name': 'net.ipv4.tcp_fin_timeout', 'value': 20}, {'name': 'net.ipv4.tcp_tw_reuse', 'value': 1}, {'name': 'net.core.somaxconn', 'value': 4096}, {'name': 'net.ipv4.tcp_syncookies', 'value': 0}, {'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192}]})
2025-05-25 03:34:53.417260 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 => (item={'key': 'generic', 'value': [{'name': 'vm.swappiness', 'value': 1}]})
2025-05-25 03:34:53.420935 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 => (item={'key': 'compute', 'value': [{'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576}]})
2025-05-25 03:34:53.421501 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 => (item={'key': 'k3s_node', 'value': [{'name': 'fs.inotify.max_user_instances', 'value': 1024}]})
2025-05-25 03:34:53.422459 | orchestrator |
2025-05-25 03:34:53.423476 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on elasticsearch] ***********
2025-05-25 03:34:53.424460 | orchestrator | Sunday 25 May 2025 03:34:53 +0000 (0:00:00.359) 0:03:08.454 ************
2025-05-25 03:34:53.472157 | orchestrator | skipping: [testbed-manager] => (item={'name': 'vm.max_map_count', 'value': 262144})
2025-05-25 03:34:53.499672 | orchestrator | skipping: [testbed-manager]
2025-05-25 03:34:53.500006 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'vm.max_map_count', 'value': 262144})
2025-05-25 03:34:53.530302 | orchestrator | skipping: [testbed-node-3]
2025-05-25 03:34:53.530769 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'vm.max_map_count', 'value': 262144})
2025-05-25 03:34:53.561377 | orchestrator | skipping: [testbed-node-4]
2025-05-25 03:34:53.562672 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'vm.max_map_count', 'value': 262144})
2025-05-25 03:34:53.586530 | orchestrator | skipping: [testbed-node-5]
2025-05-25 03:34:54.138614 | orchestrator | changed: [testbed-node-0] => (item={'name': 'vm.max_map_count', 'value': 262144})
2025-05-25 03:34:54.139626 | orchestrator | changed: [testbed-node-2] => (item={'name': 'vm.max_map_count', 'value': 262144})
2025-05-25 03:34:54.140225 | orchestrator | changed: [testbed-node-1] => (item={'name': 'vm.max_map_count', 'value': 262144})
2025-05-25 03:34:54.140957 | orchestrator |
2025-05-25 03:34:54.141672 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on rabbitmq] ****************
2025-05-25 03:34:54.142842 | orchestrator | Sunday 25 May 2025 03:34:54 +0000 (0:00:00.722) 0:03:09.176 ************
2025-05-25 03:34:54.246439 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})
2025-05-25 03:34:54.247605 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})
2025-05-25 03:34:54.251300 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})
2025-05-25 03:34:54.251382 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.core.wmem_max', 'value': 16777216})
2025-05-25 03:34:54.251401 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.core.rmem_max', 'value': 16777216})
2025-05-25 03:34:54.251418 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})
2025-05-25 03:34:54.251436 | orchestrator | skipping: [testbed-manager]
=> (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})  2025-05-25 03:34:54.251512 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})  2025-05-25 03:34:54.252203 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})  2025-05-25 03:34:54.252625 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})  2025-05-25 03:34:54.253193 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.core.somaxconn', 'value': 4096})  2025-05-25 03:34:54.282662 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.core.wmem_max', 'value': 16777216})  2025-05-25 03:34:54.283576 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})  2025-05-25 03:34:54.285246 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.core.rmem_max', 'value': 16777216})  2025-05-25 03:34:54.314360 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})  2025-05-25 03:34:54.352979 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})  2025-05-25 03:34:54.353136 | orchestrator | skipping: [testbed-manager] 2025-05-25 03:34:54.353152 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})  2025-05-25 03:34:54.353252 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.core.somaxconn', 'value': 4096})  2025-05-25 03:34:54.353697 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})  2025-05-25 03:34:54.354128 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})  2025-05-25 03:34:54.354992 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})  2025-05-25 
03:34:54.355490 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})  2025-05-25 03:34:54.355824 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})  2025-05-25 03:34:54.356295 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.core.wmem_max', 'value': 16777216})  2025-05-25 03:34:54.356622 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.core.rmem_max', 'value': 16777216})  2025-05-25 03:34:54.362139 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})  2025-05-25 03:34:57.790277 | orchestrator | skipping: [testbed-node-3] 2025-05-25 03:34:57.793467 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})  2025-05-25 03:34:57.794296 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.core.somaxconn', 'value': 4096})  2025-05-25 03:34:57.795616 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})  2025-05-25 03:34:57.796780 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})  2025-05-25 03:34:57.797963 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})  2025-05-25 03:34:57.799185 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})  2025-05-25 03:34:57.799988 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})  2025-05-25 03:34:57.800863 | orchestrator | skipping: [testbed-node-4] 2025-05-25 03:34:57.806275 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.core.wmem_max', 'value': 16777216})  2025-05-25 03:34:57.807290 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.core.rmem_max', 'value': 16777216})  2025-05-25 
03:34:57.807383 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})  2025-05-25 03:34:57.807898 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})  2025-05-25 03:34:57.808596 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.core.somaxconn', 'value': 4096})  2025-05-25 03:34:57.809349 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})  2025-05-25 03:34:57.809842 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})  2025-05-25 03:34:57.809863 | orchestrator | skipping: [testbed-node-5] 2025-05-25 03:34:57.810211 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6}) 2025-05-25 03:34:57.810617 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6}) 2025-05-25 03:34:57.810751 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6}) 2025-05-25 03:34:57.811375 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3}) 2025-05-25 03:34:57.811525 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3}) 2025-05-25 03:34:57.811957 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3}) 2025-05-25 03:34:57.812476 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3}) 2025-05-25 03:34:57.812751 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3}) 2025-05-25 03:34:57.813104 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3}) 2025-05-25 03:34:57.813537 | orchestrator | changed: [testbed-node-0] => 
(item={'name': 'net.core.wmem_max', 'value': 16777216}) 2025-05-25 03:34:57.813923 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.core.wmem_max', 'value': 16777216}) 2025-05-25 03:34:57.814123 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.core.wmem_max', 'value': 16777216}) 2025-05-25 03:34:57.814530 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.core.rmem_max', 'value': 16777216}) 2025-05-25 03:34:57.814837 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.core.rmem_max', 'value': 16777216}) 2025-05-25 03:34:57.815361 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.core.rmem_max', 'value': 16777216}) 2025-05-25 03:34:57.815748 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20}) 2025-05-25 03:34:57.815834 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20}) 2025-05-25 03:34:57.816336 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20}) 2025-05-25 03:34:57.816799 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1}) 2025-05-25 03:34:57.817086 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1}) 2025-05-25 03:34:57.817274 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1}) 2025-05-25 03:34:57.817732 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.core.somaxconn', 'value': 4096}) 2025-05-25 03:34:57.817899 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.core.somaxconn', 'value': 4096}) 2025-05-25 03:34:57.818369 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.core.somaxconn', 'value': 4096}) 2025-05-25 03:34:57.818455 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0}) 2025-05-25 
03:34:57.821291 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0}) 2025-05-25 03:34:57.821319 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0}) 2025-05-25 03:34:57.821331 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192}) 2025-05-25 03:34:57.821342 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192}) 2025-05-25 03:34:57.821353 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192}) 2025-05-25 03:34:57.821365 | orchestrator | 2025-05-25 03:34:57.821377 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on generic] ***************** 2025-05-25 03:34:57.821388 | orchestrator | Sunday 25 May 2025 03:34:57 +0000 (0:00:03.649) 0:03:12.826 ************ 2025-05-25 03:34:58.342243 | orchestrator | changed: [testbed-manager] => (item={'name': 'vm.swappiness', 'value': 1}) 2025-05-25 03:34:58.342420 | orchestrator | changed: [testbed-node-3] => (item={'name': 'vm.swappiness', 'value': 1}) 2025-05-25 03:34:58.343136 | orchestrator | changed: [testbed-node-4] => (item={'name': 'vm.swappiness', 'value': 1}) 2025-05-25 03:34:58.343541 | orchestrator | changed: [testbed-node-5] => (item={'name': 'vm.swappiness', 'value': 1}) 2025-05-25 03:34:58.343877 | orchestrator | changed: [testbed-node-0] => (item={'name': 'vm.swappiness', 'value': 1}) 2025-05-25 03:34:58.344532 | orchestrator | changed: [testbed-node-1] => (item={'name': 'vm.swappiness', 'value': 1}) 2025-05-25 03:34:58.345020 | orchestrator | changed: [testbed-node-2] => (item={'name': 'vm.swappiness', 'value': 1}) 2025-05-25 03:34:58.345458 | orchestrator | 2025-05-25 03:34:58.345868 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on compute] ***************** 2025-05-25 03:34:58.346281 | orchestrator | Sunday 25 May 2025 
03:34:58 +0000 (0:00:00.555) 0:03:13.381 ************ 2025-05-25 03:34:58.411175 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})  2025-05-25 03:34:58.439332 | orchestrator | skipping: [testbed-manager] 2025-05-25 03:34:58.498693 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})  2025-05-25 03:34:58.525722 | orchestrator | skipping: [testbed-node-0] 2025-05-25 03:34:58.838114 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})  2025-05-25 03:34:58.839194 | orchestrator | skipping: [testbed-node-1] 2025-05-25 03:34:58.840441 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})  2025-05-25 03:34:58.841800 | orchestrator | skipping: [testbed-node-2] 2025-05-25 03:34:58.842416 | orchestrator | changed: [testbed-node-3] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576}) 2025-05-25 03:34:58.843077 | orchestrator | changed: [testbed-node-4] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576}) 2025-05-25 03:34:58.843940 | orchestrator | changed: [testbed-node-5] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576}) 2025-05-25 03:34:58.844683 | orchestrator | 2025-05-25 03:34:58.845579 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on k3s_node] **************** 2025-05-25 03:34:58.846282 | orchestrator | Sunday 25 May 2025 03:34:58 +0000 (0:00:00.494) 0:03:13.876 ************ 2025-05-25 03:34:58.890348 | orchestrator | skipping: [testbed-manager] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})  2025-05-25 03:34:58.912902 | orchestrator | skipping: [testbed-manager] 2025-05-25 03:34:58.994902 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})  2025-05-25 03:34:59.392392 
| orchestrator | skipping: [testbed-node-0] 2025-05-25 03:34:59.392806 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})  2025-05-25 03:34:59.394090 | orchestrator | skipping: [testbed-node-1] 2025-05-25 03:34:59.394581 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})  2025-05-25 03:34:59.395188 | orchestrator | skipping: [testbed-node-2] 2025-05-25 03:34:59.396110 | orchestrator | changed: [testbed-node-3] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024}) 2025-05-25 03:34:59.396782 | orchestrator | changed: [testbed-node-4] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024}) 2025-05-25 03:34:59.397253 | orchestrator | changed: [testbed-node-5] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024}) 2025-05-25 03:34:59.397779 | orchestrator | 2025-05-25 03:34:59.398240 | orchestrator | TASK [osism.commons.limits : Include limits tasks] ***************************** 2025-05-25 03:34:59.398850 | orchestrator | Sunday 25 May 2025 03:34:59 +0000 (0:00:00.555) 0:03:14.431 ************ 2025-05-25 03:34:59.481670 | orchestrator | skipping: [testbed-manager] 2025-05-25 03:34:59.513486 | orchestrator | skipping: [testbed-node-3] 2025-05-25 03:34:59.537570 | orchestrator | skipping: [testbed-node-4] 2025-05-25 03:34:59.561070 | orchestrator | skipping: [testbed-node-5] 2025-05-25 03:34:59.674735 | orchestrator | skipping: [testbed-node-0] 2025-05-25 03:34:59.675095 | orchestrator | skipping: [testbed-node-1] 2025-05-25 03:34:59.675961 | orchestrator | skipping: [testbed-node-2] 2025-05-25 03:34:59.676438 | orchestrator | 2025-05-25 03:34:59.676917 | orchestrator | TASK [osism.commons.services : Populate service facts] ************************* 2025-05-25 03:34:59.677693 | orchestrator | Sunday 25 May 2025 03:34:59 +0000 (0:00:00.282) 0:03:14.714 ************ 2025-05-25 03:35:05.199461 | orchestrator | ok: 
[testbed-node-3] 2025-05-25 03:35:05.199586 | orchestrator | ok: [testbed-node-2] 2025-05-25 03:35:05.199681 | orchestrator | ok: [testbed-node-0] 2025-05-25 03:35:05.200224 | orchestrator | ok: [testbed-manager] 2025-05-25 03:35:05.201574 | orchestrator | ok: [testbed-node-1] 2025-05-25 03:35:05.202733 | orchestrator | ok: [testbed-node-4] 2025-05-25 03:35:05.203911 | orchestrator | ok: [testbed-node-5] 2025-05-25 03:35:05.204995 | orchestrator | 2025-05-25 03:35:05.205526 | orchestrator | TASK [osism.commons.services : Check services] ********************************* 2025-05-25 03:35:05.206622 | orchestrator | Sunday 25 May 2025 03:35:05 +0000 (0:00:05.523) 0:03:20.237 ************ 2025-05-25 03:35:05.266901 | orchestrator | skipping: [testbed-manager] => (item=nscd)  2025-05-25 03:35:05.308030 | orchestrator | skipping: [testbed-manager] 2025-05-25 03:35:05.309491 | orchestrator | skipping: [testbed-node-3] => (item=nscd)  2025-05-25 03:35:05.310107 | orchestrator | skipping: [testbed-node-4] => (item=nscd)  2025-05-25 03:35:05.343641 | orchestrator | skipping: [testbed-node-3] 2025-05-25 03:35:05.380220 | orchestrator | skipping: [testbed-node-4] 2025-05-25 03:35:05.380705 | orchestrator | skipping: [testbed-node-5] => (item=nscd)  2025-05-25 03:35:05.427656 | orchestrator | skipping: [testbed-node-0] => (item=nscd)  2025-05-25 03:35:05.427826 | orchestrator | skipping: [testbed-node-5] 2025-05-25 03:35:05.500403 | orchestrator | skipping: [testbed-node-1] => (item=nscd)  2025-05-25 03:35:05.501187 | orchestrator | skipping: [testbed-node-0] 2025-05-25 03:35:05.501864 | orchestrator | skipping: [testbed-node-1] 2025-05-25 03:35:05.502564 | orchestrator | skipping: [testbed-node-2] => (item=nscd)  2025-05-25 03:35:05.503350 | orchestrator | skipping: [testbed-node-2] 2025-05-25 03:35:05.503977 | orchestrator | 2025-05-25 03:35:05.504818 | orchestrator | TASK [osism.commons.services : Start/enable required services] ***************** 2025-05-25 03:35:05.505451 | 
orchestrator | Sunday 25 May 2025 03:35:05 +0000 (0:00:00.302) 0:03:20.539 ************ 2025-05-25 03:35:06.495781 | orchestrator | ok: [testbed-manager] => (item=cron) 2025-05-25 03:35:06.496948 | orchestrator | ok: [testbed-node-3] => (item=cron) 2025-05-25 03:35:06.497407 | orchestrator | ok: [testbed-node-5] => (item=cron) 2025-05-25 03:35:06.498482 | orchestrator | ok: [testbed-node-4] => (item=cron) 2025-05-25 03:35:06.499936 | orchestrator | ok: [testbed-node-0] => (item=cron) 2025-05-25 03:35:06.502007 | orchestrator | ok: [testbed-node-1] => (item=cron) 2025-05-25 03:35:06.502177 | orchestrator | ok: [testbed-node-2] => (item=cron) 2025-05-25 03:35:06.503432 | orchestrator | 2025-05-25 03:35:06.504132 | orchestrator | TASK [osism.commons.motd : Include distribution specific configure tasks] ****** 2025-05-25 03:35:06.505287 | orchestrator | Sunday 25 May 2025 03:35:06 +0000 (0:00:00.993) 0:03:21.533 ************ 2025-05-25 03:35:06.986863 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/motd/tasks/configure-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-05-25 03:35:06.986966 | orchestrator | 2025-05-25 03:35:06.988312 | orchestrator | TASK [osism.commons.motd : Remove update-motd package] ************************* 2025-05-25 03:35:06.989352 | orchestrator | Sunday 25 May 2025 03:35:06 +0000 (0:00:00.491) 0:03:22.025 ************ 2025-05-25 03:35:08.087701 | orchestrator | ok: [testbed-manager] 2025-05-25 03:35:08.087941 | orchestrator | ok: [testbed-node-3] 2025-05-25 03:35:08.089432 | orchestrator | ok: [testbed-node-4] 2025-05-25 03:35:08.089939 | orchestrator | ok: [testbed-node-5] 2025-05-25 03:35:08.091552 | orchestrator | ok: [testbed-node-0] 2025-05-25 03:35:08.091939 | orchestrator | ok: [testbed-node-1] 2025-05-25 03:35:08.092504 | orchestrator | ok: [testbed-node-2] 2025-05-25 03:35:08.093113 | orchestrator 
| 2025-05-25 03:35:08.093674 | orchestrator | TASK [osism.commons.motd : Check if /etc/default/motd-news exists] ************* 2025-05-25 03:35:08.094423 | orchestrator | Sunday 25 May 2025 03:35:08 +0000 (0:00:01.099) 0:03:23.124 ************ 2025-05-25 03:35:08.653319 | orchestrator | ok: [testbed-manager] 2025-05-25 03:35:08.654945 | orchestrator | ok: [testbed-node-3] 2025-05-25 03:35:08.655385 | orchestrator | ok: [testbed-node-4] 2025-05-25 03:35:08.657255 | orchestrator | ok: [testbed-node-5] 2025-05-25 03:35:08.657472 | orchestrator | ok: [testbed-node-0] 2025-05-25 03:35:08.658380 | orchestrator | ok: [testbed-node-1] 2025-05-25 03:35:08.658941 | orchestrator | ok: [testbed-node-2] 2025-05-25 03:35:08.659826 | orchestrator | 2025-05-25 03:35:08.660301 | orchestrator | TASK [osism.commons.motd : Disable the dynamic motd-news service] ************** 2025-05-25 03:35:08.660808 | orchestrator | Sunday 25 May 2025 03:35:08 +0000 (0:00:00.568) 0:03:23.693 ************ 2025-05-25 03:35:09.264975 | orchestrator | changed: [testbed-manager] 2025-05-25 03:35:09.268598 | orchestrator | changed: [testbed-node-3] 2025-05-25 03:35:09.268636 | orchestrator | changed: [testbed-node-4] 2025-05-25 03:35:09.268649 | orchestrator | changed: [testbed-node-5] 2025-05-25 03:35:09.269688 | orchestrator | changed: [testbed-node-0] 2025-05-25 03:35:09.270452 | orchestrator | changed: [testbed-node-1] 2025-05-25 03:35:09.272083 | orchestrator | changed: [testbed-node-2] 2025-05-25 03:35:09.272420 | orchestrator | 2025-05-25 03:35:09.272859 | orchestrator | TASK [osism.commons.motd : Get all configuration files in /etc/pam.d] ********** 2025-05-25 03:35:09.273737 | orchestrator | Sunday 25 May 2025 03:35:09 +0000 (0:00:00.608) 0:03:24.301 ************ 2025-05-25 03:35:09.854322 | orchestrator | ok: [testbed-node-3] 2025-05-25 03:35:09.854942 | orchestrator | ok: [testbed-node-4] 2025-05-25 03:35:09.855507 | orchestrator | ok: [testbed-node-0] 2025-05-25 03:35:09.857766 | orchestrator 
| ok: [testbed-manager] 2025-05-25 03:35:09.857807 | orchestrator | ok: [testbed-node-1] 2025-05-25 03:35:09.857819 | orchestrator | ok: [testbed-node-5] 2025-05-25 03:35:09.857830 | orchestrator | ok: [testbed-node-2] 2025-05-25 03:35:09.859663 | orchestrator | 2025-05-25 03:35:09.860409 | orchestrator | TASK [osism.commons.motd : Remove pam_motd.so rule] **************************** 2025-05-25 03:35:09.861342 | orchestrator | Sunday 25 May 2025 03:35:09 +0000 (0:00:00.591) 0:03:24.893 ************ 2025-05-25 03:35:10.798149 | orchestrator | changed: [testbed-manager] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 591, 'dev': 2049, 'nlink': 1, 'atime': 1748142277.480603, 'mtime': 1723170802.0, 'ctime': 1728031288.6324632, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-05-25 03:35:10.798286 | orchestrator | changed: [testbed-node-3] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 591, 'dev': 2049, 'nlink': 1, 'atime': 1748142308.8364701, 'mtime': 1723170802.0, 'ctime': 1728031288.6324632, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-05-25 03:35:10.798562 | orchestrator | changed: [testbed-node-0] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 591, 'dev': 2049, 'nlink': 1, 
'atime': 1748142315.7089348, 'mtime': 1723170802.0, 'ctime': 1728031288.6324632, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-05-25 03:35:10.801232 | orchestrator | changed: [testbed-node-5] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 591, 'dev': 2049, 'nlink': 1, 'atime': 1748142311.2294166, 'mtime': 1723170802.0, 'ctime': 1728031288.6324632, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-05-25 03:35:10.801847 | orchestrator | changed: [testbed-node-4] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 591, 'dev': 2049, 'nlink': 1, 'atime': 1748142323.7744863, 'mtime': 1723170802.0, 'ctime': 1728031288.6324632, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-05-25 03:35:10.802880 | orchestrator | changed: [testbed-node-1] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 591, 'dev': 2049, 'nlink': 1, 'atime': 1748142311.54214, 'mtime': 1723170802.0, 'ctime': 1728031288.6324632, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 
'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-05-25 03:35:10.803791 | orchestrator | changed: [testbed-node-2] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 591, 'dev': 2049, 'nlink': 1, 'atime': 1748142308.2187426, 'mtime': 1723170802.0, 'ctime': 1728031288.6324632, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-05-25 03:35:10.803828 | orchestrator | changed: [testbed-manager] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 577, 'dev': 2049, 'nlink': 1, 'atime': 1748142306.9259539, 'mtime': 1712646062.0, 'ctime': 1728031288.6314633, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-05-25 03:35:10.804224 | orchestrator | changed: [testbed-node-3] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 577, 'dev': 2049, 'nlink': 1, 'atime': 1748142231.7419333, 'mtime': 1712646062.0, 'ctime': 1728031288.6314633, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-05-25 03:35:10.804708 | orchestrator | changed: [testbed-node-0] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 
'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 577, 'dev': 2049, 'nlink': 1, 'atime': 1748142238.3016636, 'mtime': 1712646062.0, 'ctime': 1728031288.6314633, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-05-25 03:35:10.805406 | orchestrator | changed: [testbed-node-5] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 577, 'dev': 2049, 'nlink': 1, 'atime': 1748142228.7508476, 'mtime': 1712646062.0, 'ctime': 1728031288.6314633, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-05-25 03:35:10.805548 | orchestrator | changed: [testbed-node-2] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 577, 'dev': 2049, 'nlink': 1, 'atime': 1748142233.3724456, 'mtime': 1712646062.0, 'ctime': 1728031288.6314633, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-05-25 03:35:10.806252 | orchestrator | changed: [testbed-node-1] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 577, 'dev': 2049, 'nlink': 1, 'atime': 1748142229.493443, 'mtime': 1712646062.0, 'ctime': 1728031288.6314633, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-05-25 03:35:10.806597 | orchestrator | changed: [testbed-node-4] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 577, 'dev': 2049, 'nlink': 1, 'atime': 1748142239.6252613, 'mtime': 1712646062.0, 'ctime': 1728031288.6314633, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-05-25 03:35:10.806925 | orchestrator |
2025-05-25 03:35:10.807577 | orchestrator | TASK [osism.commons.motd : Copy motd file] *************************************
2025-05-25 03:35:10.807845 | orchestrator | Sunday 25 May 2025 03:35:10 +0000 (0:00:00.942) 0:03:25.835 ************
2025-05-25 03:35:11.878727 | orchestrator | changed: [testbed-manager]
2025-05-25 03:35:11.878832 | orchestrator | changed: [testbed-node-4]
2025-05-25 03:35:11.878847 | orchestrator | changed: [testbed-node-3]
2025-05-25 03:35:11.878858 | orchestrator | changed: [testbed-node-5]
2025-05-25 03:35:11.879151 | orchestrator | changed: [testbed-node-0]
2025-05-25 03:35:11.879908 | orchestrator | changed: [testbed-node-1]
2025-05-25 03:35:11.880419 | orchestrator | changed: [testbed-node-2]
2025-05-25 03:35:11.881263 | orchestrator |
2025-05-25 03:35:11.882319 | orchestrator | TASK [osism.commons.motd : Copy issue file] ************************************
2025-05-25 03:35:11.882659 | orchestrator | Sunday 25 May 2025 03:35:11 +0000 (0:00:01.077) 0:03:26.912 ************
2025-05-25 03:35:12.933020 | orchestrator | changed: [testbed-manager]
2025-05-25 03:35:12.938195 | orchestrator | changed: [testbed-node-3]
2025-05-25 03:35:12.938234 | orchestrator | changed: [testbed-node-5]
2025-05-25 03:35:12.938247 | orchestrator | changed: [testbed-node-4]
2025-05-25 03:35:12.938258 | orchestrator | changed: [testbed-node-1]
2025-05-25 03:35:12.938633 | orchestrator | changed: [testbed-node-0]
2025-05-25 03:35:12.939712 | orchestrator | changed: [testbed-node-2]
2025-05-25 03:35:12.940508 | orchestrator |
2025-05-25 03:35:12.941215 | orchestrator | TASK [osism.commons.motd : Copy issue.net file] ********************************
2025-05-25 03:35:12.942153 | orchestrator | Sunday 25 May 2025 03:35:12 +0000 (0:00:01.057) 0:03:27.970 ************
2025-05-25 03:35:14.031031 | orchestrator | changed: [testbed-manager]
2025-05-25 03:35:14.032105 | orchestrator | changed: [testbed-node-3]
2025-05-25 03:35:14.032905 | orchestrator | changed: [testbed-node-5]
2025-05-25 03:35:14.033287 | orchestrator | changed: [testbed-node-4]
2025-05-25 03:35:14.034271 | orchestrator | changed: [testbed-node-0]
2025-05-25 03:35:14.034767 | orchestrator | changed: [testbed-node-1]
2025-05-25 03:35:14.035363 | orchestrator | changed: [testbed-node-2]
2025-05-25 03:35:14.035908 | orchestrator |
2025-05-25 03:35:14.036452 | orchestrator | TASK [osism.commons.motd : Configure SSH to print the motd] ********************
2025-05-25 03:35:14.037162 | orchestrator | Sunday 25 May 2025 03:35:14 +0000 (0:00:01.096) 0:03:29.067 ************
2025-05-25 03:35:14.127464 | orchestrator | skipping: [testbed-manager]
2025-05-25 03:35:14.163132 | orchestrator | skipping: [testbed-node-3]
2025-05-25 03:35:14.194967 | orchestrator | skipping: [testbed-node-4]
2025-05-25 03:35:14.232766 | orchestrator | skipping: [testbed-node-5]
2025-05-25 03:35:14.304859 | orchestrator | skipping: [testbed-node-0]
2025-05-25 03:35:14.305411 | orchestrator | skipping: [testbed-node-1]
2025-05-25 03:35:14.305842 | orchestrator | skipping: [testbed-node-2]
2025-05-25 03:35:14.306594 | orchestrator |
2025-05-25 03:35:14.307165 | orchestrator | TASK [osism.commons.motd : Configure SSH to not print the motd] ****************
2025-05-25 03:35:14.307880 | orchestrator | Sunday 25 May 2025 03:35:14 +0000 (0:00:00.275) 0:03:29.343 ************
2025-05-25 03:35:15.043506 | orchestrator | ok: [testbed-manager]
2025-05-25 03:35:15.044631 | orchestrator | ok: [testbed-node-3]
2025-05-25 03:35:15.046762 | orchestrator | ok: [testbed-node-4]
2025-05-25 03:35:15.047412 | orchestrator | ok: [testbed-node-5]
2025-05-25 03:35:15.049423 | orchestrator | ok: [testbed-node-0]
2025-05-25 03:35:15.049668 | orchestrator | ok: [testbed-node-1]
2025-05-25 03:35:15.050912 | orchestrator | ok: [testbed-node-2]
2025-05-25 03:35:15.051703 | orchestrator |
2025-05-25 03:35:15.052675 | orchestrator | TASK [osism.services.rng : Include distribution specific install tasks] ********
2025-05-25 03:35:15.053445 | orchestrator | Sunday 25 May 2025 03:35:15 +0000 (0:00:00.738) 0:03:30.081 ************
2025-05-25 03:35:15.440824 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/rng/tasks/install-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2025-05-25 03:35:15.444529 | orchestrator |
2025-05-25 03:35:15.444557 | orchestrator | TASK [osism.services.rng : Install rng package] ********************************
2025-05-25 03:35:15.445303 | orchestrator | Sunday 25 May 2025 03:35:15 +0000 (0:00:00.396) 0:03:30.478 ************
2025-05-25 03:35:22.550564 | orchestrator | ok: [testbed-manager]
2025-05-25 03:35:22.550681 | orchestrator | changed: [testbed-node-3]
2025-05-25 03:35:22.551441 | orchestrator | changed: [testbed-node-2]
2025-05-25 03:35:22.554382 | orchestrator | changed: [testbed-node-0]
2025-05-25 03:35:22.555081 | orchestrator | changed: [testbed-node-5]
2025-05-25 03:35:22.556401 | orchestrator | changed: [testbed-node-1]
2025-05-25 03:35:22.557180 | orchestrator | changed: [testbed-node-4]
2025-05-25 03:35:22.558095 | orchestrator |
2025-05-25 03:35:22.558858 | orchestrator | TASK [osism.services.rng : Remove haveged package] *****************************
2025-05-25 03:35:22.559460 | orchestrator | Sunday 25 May 2025 03:35:22 +0000 (0:00:07.108) 0:03:37.586 ************
2025-05-25 03:35:23.737891 | orchestrator | ok: [testbed-manager]
2025-05-25 03:35:23.738198 | orchestrator | ok: [testbed-node-3]
2025-05-25 03:35:23.739657 | orchestrator | ok: [testbed-node-5]
2025-05-25 03:35:23.740481 | orchestrator | ok: [testbed-node-0]
2025-05-25 03:35:23.741003 | orchestrator | ok: [testbed-node-1]
2025-05-25 03:35:23.741794 | orchestrator | ok: [testbed-node-2]
2025-05-25 03:35:23.742226 | orchestrator | ok: [testbed-node-4]
2025-05-25 03:35:23.743342 | orchestrator |
2025-05-25 03:35:23.743947 | orchestrator | TASK [osism.services.rng : Manage rng service] *********************************
2025-05-25 03:35:23.744619 | orchestrator | Sunday 25 May 2025 03:35:23 +0000 (0:00:01.188) 0:03:38.775 ************
2025-05-25 03:35:24.747536 | orchestrator | ok: [testbed-manager]
2025-05-25 03:35:24.750662 | orchestrator | ok: [testbed-node-3]
2025-05-25 03:35:24.750761 | orchestrator | ok: [testbed-node-4]
2025-05-25 03:35:24.750964 | orchestrator | ok: [testbed-node-5]
2025-05-25 03:35:24.751354 | orchestrator | ok: [testbed-node-0]
2025-05-25 03:35:24.751588 | orchestrator | ok: [testbed-node-1]
2025-05-25 03:35:24.751944 | orchestrator | ok: [testbed-node-2]
2025-05-25 03:35:24.752335 | orchestrator |
2025-05-25 03:35:24.752912 | orchestrator | TASK [osism.services.smartd : Include distribution specific install tasks] *****
2025-05-25 03:35:24.753660 | orchestrator | Sunday 25 May 2025 03:35:24 +0000 (0:00:01.009) 0:03:39.784 ************
2025-05-25 03:35:25.262400 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/smartd/tasks/install-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2025-05-25 03:35:25.262576 | orchestrator |
2025-05-25 03:35:25.263557 | orchestrator | TASK [osism.services.smartd : Install smartmontools package] *******************
2025-05-25 03:35:25.264031 | orchestrator | Sunday 25 May 2025 03:35:25 +0000 (0:00:00.516) 0:03:40.301 ************
2025-05-25 03:35:33.077478 | orchestrator | changed: [testbed-node-3]
2025-05-25 03:35:33.077606 | orchestrator | changed: [testbed-node-0]
2025-05-25 03:35:33.077622 | orchestrator | changed: [testbed-node-2]
2025-05-25 03:35:33.077700 | orchestrator | changed: [testbed-node-4]
2025-05-25 03:35:33.078711 | orchestrator | changed: [testbed-node-5]
2025-05-25 03:35:33.079629 | orchestrator | changed: [testbed-node-1]
2025-05-25 03:35:33.080863 | orchestrator | changed: [testbed-manager]
2025-05-25 03:35:33.081529 | orchestrator |
2025-05-25 03:35:33.082671 | orchestrator | TASK [osism.services.smartd : Create /var/log/smartd directory] ****************
2025-05-25 03:35:33.083411 | orchestrator | Sunday 25 May 2025 03:35:33 +0000 (0:00:07.810) 0:03:48.112 ************
2025-05-25 03:35:33.647132 | orchestrator | changed: [testbed-manager]
2025-05-25 03:35:33.647945 | orchestrator | changed: [testbed-node-3]
2025-05-25 03:35:33.651956 | orchestrator | changed: [testbed-node-4]
2025-05-25 03:35:33.653492 | orchestrator | changed: [testbed-node-5]
2025-05-25 03:35:33.653721 | orchestrator | changed: [testbed-node-0]
2025-05-25 03:35:33.655392 | orchestrator | changed: [testbed-node-1]
2025-05-25 03:35:33.656504 | orchestrator | changed: [testbed-node-2]
2025-05-25 03:35:33.657629 | orchestrator |
2025-05-25 03:35:33.659160 | orchestrator | TASK [osism.services.smartd : Copy smartmontools configuration file] ***********
2025-05-25 03:35:33.659713 | orchestrator | Sunday 25 May 2025 03:35:33 +0000 (0:00:00.573) 0:03:48.685 ************
2025-05-25 03:35:34.740911 | orchestrator | changed: [testbed-manager]
2025-05-25 03:35:34.741103 | orchestrator | changed: [testbed-node-3]
2025-05-25 03:35:34.742906 | orchestrator | changed: [testbed-node-5]
2025-05-25 03:35:34.743548 | orchestrator | changed: [testbed-node-4]
2025-05-25 03:35:34.744456 | orchestrator | changed: [testbed-node-0]
2025-05-25 03:35:34.745559 | orchestrator | changed: [testbed-node-2]
2025-05-25 03:35:34.746184 | orchestrator | changed: [testbed-node-1]
2025-05-25 03:35:34.746657 | orchestrator |
2025-05-25 03:35:34.747360 | orchestrator | TASK [osism.services.smartd : Manage smartd service] ***************************
2025-05-25 03:35:34.748038 | orchestrator | Sunday 25 May 2025 03:35:34 +0000 (0:00:01.094) 0:03:49.779 ************
2025-05-25 03:35:35.808953 | orchestrator | changed: [testbed-manager]
2025-05-25 03:35:35.809193 | orchestrator | changed: [testbed-node-3]
2025-05-25 03:35:35.809807 | orchestrator | changed: [testbed-node-4]
2025-05-25 03:35:35.810553 | orchestrator | changed: [testbed-node-5]
2025-05-25 03:35:35.811513 | orchestrator | changed: [testbed-node-0]
2025-05-25 03:35:35.812022 | orchestrator | changed: [testbed-node-1]
2025-05-25 03:35:35.812467 | orchestrator | changed: [testbed-node-2]
2025-05-25 03:35:35.813360 | orchestrator |
2025-05-25 03:35:35.813702 | orchestrator | TASK [osism.commons.cleanup : Gather variables for each operating system] ******
2025-05-25 03:35:35.814298 | orchestrator | Sunday 25 May 2025 03:35:35 +0000 (0:00:01.067) 0:03:50.847 ************
2025-05-25 03:35:35.929014 | orchestrator | ok: [testbed-manager]
2025-05-25 03:35:35.961377 | orchestrator | ok: [testbed-node-3]
2025-05-25 03:35:36.003968 | orchestrator | ok: [testbed-node-4]
2025-05-25 03:35:36.036375 | orchestrator | ok: [testbed-node-5]
2025-05-25 03:35:36.099941 | orchestrator | ok: [testbed-node-0]
2025-05-25 03:35:36.100383 | orchestrator | ok: [testbed-node-1]
2025-05-25 03:35:36.101248 | orchestrator | ok: [testbed-node-2]
2025-05-25 03:35:36.101749 | orchestrator |
2025-05-25 03:35:36.103591 | orchestrator | TASK [osism.commons.cleanup : Set cleanup_packages_distribution variable to default value] ***
2025-05-25 03:35:36.103828 | orchestrator | Sunday 25 May 2025 03:35:36 +0000 (0:00:00.292) 0:03:51.139 ************
2025-05-25 03:35:36.207495 | orchestrator | ok: [testbed-manager]
2025-05-25 03:35:36.240253 | orchestrator | ok: [testbed-node-3]
2025-05-25 03:35:36.276229 | orchestrator | ok: [testbed-node-4]
2025-05-25 03:35:36.311942 | orchestrator | ok: [testbed-node-5]
2025-05-25 03:35:36.395442 | orchestrator | ok: [testbed-node-0]
2025-05-25 03:35:36.396074 | orchestrator | ok: [testbed-node-1]
2025-05-25 03:35:36.396535 | orchestrator | ok: [testbed-node-2]
2025-05-25 03:35:36.397303 | orchestrator |
2025-05-25 03:35:36.397822 | orchestrator | TASK [osism.commons.cleanup : Set cleanup_services_distribution variable to default value] ***
2025-05-25 03:35:36.398580 | orchestrator | Sunday 25 May 2025 03:35:36 +0000 (0:00:00.293) 0:03:51.433 ************
2025-05-25 03:35:36.501435 | orchestrator | ok: [testbed-manager]
2025-05-25 03:35:36.533968 | orchestrator | ok: [testbed-node-3]
2025-05-25 03:35:36.568622 | orchestrator | ok: [testbed-node-4]
2025-05-25 03:35:36.605100 | orchestrator | ok: [testbed-node-5]
2025-05-25 03:35:36.686995 | orchestrator | ok: [testbed-node-0]
2025-05-25 03:35:36.688250 | orchestrator | ok: [testbed-node-1]
2025-05-25 03:35:36.688991 | orchestrator | ok: [testbed-node-2]
2025-05-25 03:35:36.689881 | orchestrator |
2025-05-25 03:35:36.690882 | orchestrator | TASK [osism.commons.cleanup : Populate service facts] **************************
2025-05-25 03:35:36.691650 | orchestrator | Sunday 25 May 2025 03:35:36 +0000 (0:00:00.293) 0:03:51.726 ************
2025-05-25 03:35:42.254105 | orchestrator | ok: [testbed-manager]
2025-05-25 03:35:42.254659 | orchestrator | ok: [testbed-node-3]
2025-05-25 03:35:42.256191 | orchestrator | ok: [testbed-node-0]
2025-05-25 03:35:42.256213 | orchestrator | ok: [testbed-node-2]
2025-05-25 03:35:42.257398 | orchestrator | ok: [testbed-node-4]
2025-05-25 03:35:42.258447 | orchestrator | ok: [testbed-node-5]
2025-05-25 03:35:42.259133 | orchestrator | ok: [testbed-node-1]
2025-05-25 03:35:42.259962 | orchestrator |
2025-05-25 03:35:42.260697 | orchestrator | TASK [osism.commons.cleanup : Include distribution specific timer tasks] *******
2025-05-25 03:35:42.261232 | orchestrator | Sunday 25 May 2025 03:35:42 +0000 (0:00:05.565) 0:03:57.292 ************
2025-05-25 03:35:42.664862 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/cleanup/tasks/timers-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2025-05-25 03:35:42.665628 | orchestrator |
2025-05-25 03:35:42.666237 | orchestrator | TASK [osism.commons.cleanup : Disable apt-daily timers] ************************
2025-05-25 03:35:42.667598 | orchestrator | Sunday 25 May 2025 03:35:42 +0000 (0:00:00.411) 0:03:57.704 ************
2025-05-25 03:35:42.755589 | orchestrator | skipping: [testbed-manager] => (item=apt-daily-upgrade)
2025-05-25 03:35:42.755693 | orchestrator | skipping: [testbed-manager] => (item=apt-daily)
2025-05-25 03:35:42.756151 | orchestrator | skipping: [testbed-node-3] => (item=apt-daily-upgrade)
2025-05-25 03:35:42.756323 | orchestrator | skipping: [testbed-node-3] => (item=apt-daily)
2025-05-25 03:35:42.789602 | orchestrator | skipping: [testbed-manager]
2025-05-25 03:35:42.834127 | orchestrator | skipping: [testbed-node-4] => (item=apt-daily-upgrade)
2025-05-25 03:35:42.836682 | orchestrator | skipping: [testbed-node-3]
2025-05-25 03:35:42.837471 | orchestrator | skipping: [testbed-node-4] => (item=apt-daily)
2025-05-25 03:35:42.838291 | orchestrator | skipping: [testbed-node-5] => (item=apt-daily-upgrade)
2025-05-25 03:35:42.889161 | orchestrator | skipping: [testbed-node-5] => (item=apt-daily)
2025-05-25 03:35:42.889277 | orchestrator | skipping: [testbed-node-4]
2025-05-25 03:35:42.889870 | orchestrator | skipping: [testbed-node-0] => (item=apt-daily-upgrade)
2025-05-25 03:35:42.890635 | orchestrator | skipping: [testbed-node-0] => (item=apt-daily)
2025-05-25 03:35:42.923884 | orchestrator | skipping: [testbed-node-5]
2025-05-25 03:35:43.022322 | orchestrator | skipping: [testbed-node-1] => (item=apt-daily-upgrade)
2025-05-25 03:35:43.022378 | orchestrator | skipping: [testbed-node-0]
2025-05-25 03:35:43.023375 | orchestrator | skipping: [testbed-node-1] => (item=apt-daily)
2025-05-25 03:35:43.024217 | orchestrator | skipping: [testbed-node-1]
2025-05-25 03:35:43.024454 | orchestrator | skipping: [testbed-node-2] => (item=apt-daily-upgrade)
2025-05-25 03:35:43.025407 | orchestrator | skipping: [testbed-node-2] => (item=apt-daily)
2025-05-25 03:35:43.026326 | orchestrator | skipping: [testbed-node-2]
2025-05-25 03:35:43.027359 | orchestrator |
2025-05-25 03:35:43.028183 | orchestrator | TASK [osism.commons.cleanup : Include service tasks] ***************************
2025-05-25 03:35:43.028389 | orchestrator | Sunday 25 May 2025 03:35:43 +0000 (0:00:00.357) 0:03:58.061 ************
2025-05-25 03:35:43.432652 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/cleanup/tasks/services-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2025-05-25 03:35:43.432843 | orchestrator |
2025-05-25 03:35:43.434164 | orchestrator | TASK [osism.commons.cleanup : Cleanup services] ********************************
2025-05-25 03:35:43.434805 | orchestrator | Sunday 25 May 2025 03:35:43 +0000 (0:00:00.410) 0:03:58.471 ************
2025-05-25 03:35:43.501111 | orchestrator | skipping: [testbed-manager] => (item=ModemManager.service)
2025-05-25 03:35:43.534912 | orchestrator | skipping: [testbed-manager]
2025-05-25 03:35:43.535033 | orchestrator | skipping: [testbed-node-3] => (item=ModemManager.service)
2025-05-25 03:35:43.575686 | orchestrator | skipping: [testbed-node-4] => (item=ModemManager.service)
2025-05-25 03:35:43.576950 | orchestrator | skipping: [testbed-node-3]
2025-05-25 03:35:43.613374 | orchestrator | skipping: [testbed-node-4]
2025-05-25 03:35:43.613926 | orchestrator | skipping: [testbed-node-5] => (item=ModemManager.service)
2025-05-25 03:35:43.649591 | orchestrator | skipping: [testbed-node-0] => (item=ModemManager.service)
2025-05-25 03:35:43.650464 | orchestrator | skipping: [testbed-node-5]
2025-05-25 03:35:43.715934 | orchestrator | skipping: [testbed-node-1] => (item=ModemManager.service)
2025-05-25 03:35:43.717146 | orchestrator | skipping: [testbed-node-0]
2025-05-25 03:35:43.718269 | orchestrator | skipping: [testbed-node-1]
2025-05-25 03:35:43.719598 | orchestrator | skipping: [testbed-node-2] => (item=ModemManager.service)
2025-05-25 03:35:43.719924 | orchestrator | skipping: [testbed-node-2]
2025-05-25 03:35:43.720871 | orchestrator |
2025-05-25 03:35:43.722393 | orchestrator | TASK [osism.commons.cleanup : Include packages tasks] **************************
2025-05-25 03:35:43.723092 | orchestrator | Sunday 25 May 2025 03:35:43 +0000 (0:00:00.282) 0:03:58.754 ************
2025-05-25 03:35:44.204307 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/cleanup/tasks/packages-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2025-05-25 03:35:44.204861 | orchestrator |
2025-05-25 03:35:44.205738 | orchestrator | TASK [osism.commons.cleanup : Cleanup installed packages] **********************
2025-05-25 03:35:44.207213 | orchestrator | Sunday 25 May 2025 03:35:44 +0000 (0:00:00.488) 0:03:59.243 ************
2025-05-25 03:36:16.789373 | orchestrator | changed: [testbed-node-2]
2025-05-25 03:36:16.789494 | orchestrator | changed: [testbed-node-3]
2025-05-25 03:36:16.789511 | orchestrator | changed: [testbed-node-0]
2025-05-25 03:36:16.790649 | orchestrator | changed: [testbed-manager]
2025-05-25 03:36:16.792048 | orchestrator | changed: [testbed-node-1]
2025-05-25 03:36:16.794217 | orchestrator | changed: [testbed-node-5]
2025-05-25 03:36:16.794927 | orchestrator | changed: [testbed-node-4]
2025-05-25 03:36:16.794986 | orchestrator |
2025-05-25 03:36:16.796009 | orchestrator | TASK [osism.commons.cleanup : Remove cloudinit package] ************************
2025-05-25 03:36:16.796093 | orchestrator | Sunday 25 May 2025 03:36:16 +0000 (0:00:32.581) 0:04:31.824 ************
2025-05-25 03:36:24.003911 | orchestrator | changed: [testbed-manager]
2025-05-25 03:36:24.004329 | orchestrator | changed: [testbed-node-3]
2025-05-25 03:36:24.004377 | orchestrator | changed: [testbed-node-2]
2025-05-25 03:36:24.004705 | orchestrator | changed: [testbed-node-0]
2025-05-25 03:36:24.006602 | orchestrator | changed: [testbed-node-4]
2025-05-25 03:36:24.007511 | orchestrator | changed: [testbed-node-1]
2025-05-25 03:36:24.009548 | orchestrator | changed: [testbed-node-5]
2025-05-25 03:36:24.010743 | orchestrator |
2025-05-25 03:36:24.011318 | orchestrator | TASK [osism.commons.cleanup : Uninstall unattended-upgrades package] ***********
2025-05-25 03:36:24.011931 | orchestrator | Sunday 25 May 2025 03:36:23 +0000 (0:00:07.214) 0:04:39.039 ************
2025-05-25 03:36:31.092937 | orchestrator | changed: [testbed-manager]
2025-05-25 03:36:31.093601 | orchestrator | changed: [testbed-node-0]
2025-05-25 03:36:31.101685 | orchestrator | changed: [testbed-node-3]
2025-05-25 03:36:31.101741 | orchestrator | changed: [testbed-node-2]
2025-05-25 03:36:31.102993 | orchestrator | changed: [testbed-node-1]
2025-05-25 03:36:31.103718 | orchestrator | changed: [testbed-node-4]
2025-05-25 03:36:31.104923 | orchestrator | changed: [testbed-node-5]
2025-05-25 03:36:31.105396 | orchestrator |
2025-05-25 03:36:31.105953 | orchestrator | TASK [osism.commons.cleanup : Remove useless packages from the cache] **********
2025-05-25 03:36:31.106501 | orchestrator | Sunday 25 May 2025 03:36:31 +0000 (0:00:07.087) 0:04:46.126 ************
2025-05-25 03:36:32.652285 | orchestrator | ok: [testbed-manager]
2025-05-25 03:36:32.652358 | orchestrator | ok: [testbed-node-3]
2025-05-25 03:36:32.652384 | orchestrator | ok: [testbed-node-0]
2025-05-25 03:36:32.653929 | orchestrator | ok: [testbed-node-1]
2025-05-25 03:36:32.653951 | orchestrator | ok: [testbed-node-5]
2025-05-25 03:36:32.653955 | orchestrator | ok: [testbed-node-2]
2025-05-25 03:36:32.653959 | orchestrator | ok: [testbed-node-4]
2025-05-25 03:36:32.654096 | orchestrator |
2025-05-25 03:36:32.654766 | orchestrator | TASK [osism.commons.cleanup : Remove dependencies that are no longer required] ***
2025-05-25 03:36:32.655582 | orchestrator | Sunday 25 May 2025 03:36:32 +0000 (0:00:01.559) 0:04:47.686 ************
2025-05-25 03:36:37.974515 | orchestrator | changed: [testbed-manager]
2025-05-25 03:36:37.974628 | orchestrator | changed: [testbed-node-3]
2025-05-25 03:36:37.974893 | orchestrator | changed: [testbed-node-0]
2025-05-25 03:36:37.974917 | orchestrator | changed: [testbed-node-1]
2025-05-25 03:36:37.975261 | orchestrator | changed: [testbed-node-2]
2025-05-25 03:36:37.975596 | orchestrator | changed: [testbed-node-5]
2025-05-25 03:36:37.977441 | orchestrator | changed: [testbed-node-4]
2025-05-25 03:36:37.978415 | orchestrator |
2025-05-25 03:36:37.978444 | orchestrator | TASK [osism.commons.cleanup : Include cloudinit tasks] *************************
2025-05-25 03:36:37.978459 | orchestrator | Sunday 25 May 2025 03:36:37 +0000 (0:00:05.323) 0:04:53.009 ************
2025-05-25 03:36:38.434766 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/cleanup/tasks/cloudinit.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2025-05-25 03:36:38.438377 | orchestrator |
2025-05-25 03:36:38.438435 | orchestrator | TASK [osism.commons.cleanup : Remove cloud-init configuration directory] *******
2025-05-25 03:36:38.439037 | orchestrator | Sunday 25 May 2025 03:36:38 +0000 (0:00:00.462) 0:04:53.472 ************
2025-05-25 03:36:39.152287 | orchestrator | changed: [testbed-manager]
2025-05-25 03:36:39.152723 | orchestrator | changed: [testbed-node-3]
2025-05-25 03:36:39.153857 | orchestrator | changed: [testbed-node-4]
2025-05-25 03:36:39.155322 | orchestrator | changed: [testbed-node-5]
2025-05-25 03:36:39.155997 | orchestrator | changed: [testbed-node-0]
2025-05-25 03:36:39.156759 | orchestrator | changed: [testbed-node-1]
2025-05-25 03:36:39.157204 | orchestrator | changed: [testbed-node-2]
2025-05-25 03:36:39.157744 | orchestrator |
2025-05-25 03:36:39.158508 | orchestrator | TASK [osism.commons.timezone : Install tzdata package] *************************
2025-05-25 03:36:39.158989 | orchestrator | Sunday 25 May 2025 03:36:39 +0000 (0:00:00.717) 0:04:54.189 ************
2025-05-25 03:36:40.664206 | orchestrator | ok: [testbed-manager]
2025-05-25 03:36:40.665996 | orchestrator | ok: [testbed-node-3]
2025-05-25 03:36:40.666690 | orchestrator | ok: [testbed-node-5]
2025-05-25 03:36:40.667409 | orchestrator | ok: [testbed-node-0]
2025-05-25 03:36:40.668775 | orchestrator | ok: [testbed-node-2]
2025-05-25 03:36:40.669641 | orchestrator | ok: [testbed-node-1]
2025-05-25 03:36:40.670298 | orchestrator | ok: [testbed-node-4]
2025-05-25 03:36:40.672658 | orchestrator |
2025-05-25 03:36:40.673653 | orchestrator | TASK [osism.commons.timezone : Set timezone to UTC] ****************************
2025-05-25 03:36:40.674448 | orchestrator | Sunday 25 May 2025 03:36:40 +0000 (0:00:01.511) 0:04:55.701 ************
2025-05-25 03:36:41.417016 | orchestrator | changed: [testbed-node-3]
2025-05-25 03:36:41.417985 | orchestrator | changed: [testbed-node-5]
2025-05-25 03:36:41.418399 | orchestrator | changed: [testbed-node-0]
2025-05-25 03:36:41.418712 | orchestrator | changed: [testbed-node-4]
2025-05-25 03:36:41.419810 | orchestrator | changed: [testbed-node-1]
2025-05-25 03:36:41.420555 | orchestrator | changed: [testbed-manager]
2025-05-25 03:36:41.421264 | orchestrator | changed: [testbed-node-2]
2025-05-25 03:36:41.422159 | orchestrator |
2025-05-25 03:36:41.422781 | orchestrator | TASK [osism.commons.timezone : Create /etc/adjtime file] ***********************
2025-05-25 03:36:41.423878 | orchestrator | Sunday 25 May 2025 03:36:41 +0000 (0:00:00.753) 0:04:56.455 ************
2025-05-25 03:36:41.485943 | orchestrator | skipping: [testbed-manager]
2025-05-25 03:36:41.514290 | orchestrator | skipping: [testbed-node-3]
2025-05-25 03:36:41.543259 | orchestrator | skipping: [testbed-node-4]
2025-05-25 03:36:41.573302 | orchestrator | skipping: [testbed-node-5]
2025-05-25 03:36:41.603428 | orchestrator | skipping: [testbed-node-0]
2025-05-25 03:36:41.660118 | orchestrator | skipping: [testbed-node-1]
2025-05-25 03:36:41.661288 | orchestrator | skipping: [testbed-node-2]
2025-05-25 03:36:41.661331 | orchestrator |
2025-05-25 03:36:41.661404 | orchestrator | TASK [osism.commons.timezone : Ensure UTC in /etc/adjtime] *********************
2025-05-25 03:36:41.661986 | orchestrator | Sunday 25 May 2025 03:36:41 +0000 (0:00:00.243) 0:04:56.699 ************
2025-05-25 03:36:41.742677 | orchestrator | skipping: [testbed-manager]
2025-05-25 03:36:41.776165 | orchestrator | skipping: [testbed-node-3]
2025-05-25 03:36:41.810769 | orchestrator | skipping: [testbed-node-4]
2025-05-25 03:36:41.846103 | orchestrator | skipping: [testbed-node-5]
2025-05-25 03:36:41.877102 | orchestrator | skipping: [testbed-node-0]
2025-05-25 03:36:42.068918 | orchestrator | skipping: [testbed-node-1]
2025-05-25 03:36:42.069021 | orchestrator | skipping: [testbed-node-2]
2025-05-25 03:36:42.070131 | orchestrator |
2025-05-25 03:36:42.070744 | orchestrator | TASK [osism.services.docker : Gather variables for each operating system] ******
2025-05-25 03:36:42.071817 | orchestrator | Sunday 25 May 2025 03:36:42 +0000 (0:00:00.408) 0:04:57.108 ************
2025-05-25 03:36:42.200626 | orchestrator | ok: [testbed-manager]
2025-05-25 03:36:42.247116 | orchestrator | ok: [testbed-node-3]
2025-05-25 03:36:42.280119 | orchestrator | ok: [testbed-node-4]
2025-05-25 03:36:42.318113 | orchestrator | ok: [testbed-node-5]
2025-05-25 03:36:42.398860 | orchestrator | ok: [testbed-node-0]
2025-05-25 03:36:42.400671 | orchestrator | ok: [testbed-node-1]
2025-05-25 03:36:42.401169 | orchestrator | ok: [testbed-node-2]
2025-05-25 03:36:42.402464 | orchestrator |
2025-05-25 03:36:42.403235 | orchestrator | TASK [osism.services.docker : Set docker_version variable to default value] ****
2025-05-25 03:36:42.403903 | orchestrator | Sunday 25 May 2025 03:36:42 +0000 (0:00:00.330) 0:04:57.439 ************
2025-05-25 03:36:42.502329 | orchestrator | skipping: [testbed-manager]
2025-05-25 03:36:42.552254 | orchestrator | skipping: [testbed-node-3]
2025-05-25 03:36:42.587276 | orchestrator | skipping: [testbed-node-4]
2025-05-25 03:36:42.624696 | orchestrator | skipping: [testbed-node-5]
2025-05-25 03:36:42.683284 | orchestrator | skipping: [testbed-node-0]
2025-05-25 03:36:42.684827 | orchestrator | skipping: [testbed-node-1]
2025-05-25 03:36:42.685552 | orchestrator | skipping: [testbed-node-2]
2025-05-25 03:36:42.686857 | orchestrator |
2025-05-25 03:36:42.688182 | orchestrator | TASK [osism.services.docker : Set docker_cli_version variable to default value] ***
2025-05-25 03:36:42.689348 | orchestrator | Sunday 25 May 2025 03:36:42 +0000 (0:00:00.283) 0:04:57.722 ************
2025-05-25 03:36:42.795213 | orchestrator | ok: [testbed-manager]
2025-05-25 03:36:42.832203 | orchestrator | ok: [testbed-node-3]
2025-05-25 03:36:42.864053 | orchestrator | ok: [testbed-node-4]
2025-05-25 03:36:42.898690 | orchestrator | ok: [testbed-node-5]
2025-05-25 03:36:42.992419 | orchestrator | ok: [testbed-node-0]
2025-05-25 03:36:42.993225 | orchestrator | ok: [testbed-node-1]
2025-05-25 03:36:42.993788 | orchestrator | ok: [testbed-node-2]
2025-05-25 03:36:42.994668 | orchestrator |
2025-05-25 03:36:42.995264 | orchestrator | TASK [osism.services.docker : Print used docker version] ***********************
2025-05-25 03:36:42.995781 | orchestrator | Sunday 25 May 2025 03:36:42 +0000 (0:00:00.308) 0:04:58.031 ************
2025-05-25 03:36:43.209205 | orchestrator | ok: [testbed-manager] =>
2025-05-25 03:36:43.209634 | orchestrator |   docker_version: 5:27.5.1
2025-05-25 03:36:43.241437 | orchestrator | ok: [testbed-node-3] =>
2025-05-25 03:36:43.241989 | orchestrator |   docker_version: 5:27.5.1
2025-05-25 03:36:43.277043 | orchestrator | ok: [testbed-node-4] =>
2025-05-25 03:36:43.277475 | orchestrator |   docker_version: 5:27.5.1
2025-05-25 03:36:43.310479 | orchestrator | ok: [testbed-node-5] =>
2025-05-25 03:36:43.310929 | orchestrator |   docker_version: 5:27.5.1
2025-05-25 03:36:43.381483 | orchestrator | ok: [testbed-node-0] =>
2025-05-25 03:36:43.384044 | orchestrator |   docker_version: 5:27.5.1
2025-05-25 03:36:43.384493 | orchestrator | ok: [testbed-node-1] =>
2025-05-25 03:36:43.385215 | orchestrator |   docker_version: 5:27.5.1
2025-05-25 03:36:43.386600 | orchestrator | ok: [testbed-node-2] =>
2025-05-25 03:36:43.387284 | orchestrator |   docker_version: 5:27.5.1
2025-05-25 03:36:43.389093 | orchestrator |
2025-05-25 03:36:43.389121 | orchestrator | TASK [osism.services.docker : Print used docker cli version] *******************
2025-05-25 03:36:43.389134 | orchestrator | Sunday 25 May 2025 03:36:43 +0000 (0:00:00.388) 0:04:58.420 ************
2025-05-25 03:36:43.490543 | orchestrator | ok: [testbed-manager] =>
2025-05-25 03:36:43.490785 | orchestrator |   docker_cli_version: 5:27.5.1
2025-05-25 03:36:43.522188 | orchestrator | ok: [testbed-node-3] =>
2025-05-25 03:36:43.522846 | orchestrator |   docker_cli_version: 5:27.5.1
2025-05-25 03:36:43.552304 | orchestrator | ok: [testbed-node-4] =>
2025-05-25 03:36:43.552789 | orchestrator |   docker_cli_version: 5:27.5.1
2025-05-25 03:36:43.581246 | orchestrator | ok: [testbed-node-5] =>
2025-05-25 03:36:43.582493 | orchestrator |   docker_cli_version: 5:27.5.1
2025-05-25 03:36:43.639383 | orchestrator | ok: [testbed-node-0] =>
2025-05-25 03:36:43.640351 | orchestrator |   docker_cli_version: 5:27.5.1
2025-05-25 03:36:43.642635 | orchestrator | ok: [testbed-node-1] =>
2025-05-25 03:36:43.642659 | orchestrator |   docker_cli_version: 5:27.5.1
2025-05-25 03:36:43.643827 | orchestrator | ok: [testbed-node-2] =>
2025-05-25 03:36:43.644966 | orchestrator |   docker_cli_version: 5:27.5.1
2025-05-25 03:36:43.645263 | orchestrator |
2025-05-25 03:36:43.646434 | orchestrator | TASK [osism.services.docker : Include block storage tasks] *********************
2025-05-25 03:36:43.646897 | orchestrator | Sunday 25 May 2025 03:36:43 +0000 (0:00:00.258) 0:04:58.679 ************
2025-05-25 03:36:43.716705 | orchestrator | skipping: [testbed-manager]
2025-05-25 03:36:43.749377 | orchestrator | skipping: [testbed-node-3]
2025-05-25 03:36:43.783348 | orchestrator | skipping: [testbed-node-4]
2025-05-25 03:36:43.811600 | orchestrator | skipping: [testbed-node-5]
2025-05-25 03:36:43.848197 | orchestrator | skipping: [testbed-node-0]
2025-05-25 03:36:43.908971 | orchestrator | skipping: [testbed-node-1]
2025-05-25 03:36:43.909534 | orchestrator | skipping: [testbed-node-2]
2025-05-25 03:36:43.910668 | orchestrator |
2025-05-25 03:36:43.911794 | orchestrator | TASK [osism.services.docker : Include zram storage tasks] **********************
2025-05-25 03:36:43.912497 | orchestrator | Sunday 25 May 2025 03:36:43 +0000 (0:00:00.277) 0:04:58.948 ************
2025-05-25 03:36:43.971525 | orchestrator | skipping: [testbed-manager]
2025-05-25 03:36:44.001654 | orchestrator | skipping: [testbed-node-3]
2025-05-25 03:36:44.030840 | orchestrator | skipping: [testbed-node-4]
2025-05-25 03:36:44.061154 | orchestrator | skipping: [testbed-node-5]
2025-05-25 03:36:44.188713 | orchestrator | skipping: [testbed-node-0]
2025-05-25 03:36:44.191691 | orchestrator | skipping: [testbed-node-1]
2025-05-25 03:36:44.192684 | orchestrator | skipping: [testbed-node-2]
2025-05-25 03:36:44.193472 | orchestrator |
2025-05-25 03:36:44.194425 | orchestrator | TASK [osism.services.docker : Include docker install tasks] ********************
2025-05-25 03:36:44.195874 | orchestrator | Sunday 25 May 2025 03:36:44 +0000 (0:00:00.277) 0:04:59.226 ************
2025-05-25 03:36:44.608836 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/docker/tasks/install-docker-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2025-05-25 03:36:44.609125 | orchestrator |
2025-05-25 03:36:44.610271 | orchestrator | TASK [osism.services.docker : Remove old architecture-dependent repository] ****
2025-05-25 03:36:44.610631 | orchestrator | Sunday 25 May 2025 03:36:44 +0000 (0:00:00.409) 0:04:59.635 ************
2025-05-25 03:36:45.413761 | orchestrator | ok: [testbed-manager]
2025-05-25 03:36:45.413876 | orchestrator | ok: [testbed-node-3]
2025-05-25 03:36:45.413892 | orchestrator | ok: [testbed-node-0]
2025-05-25 03:36:45.414387 | orchestrator | ok: [testbed-node-4]
2025-05-25 03:36:45.415002 | orchestrator | ok: [testbed-node-1]
2025-05-25 03:36:45.415685 | orchestrator | ok: [testbed-node-5]
2025-05-25 03:36:45.416138 | orchestrator | ok: [testbed-node-2]
2025-05-25 03:36:45.416576 | orchestrator |
2025-05-25 03:36:45.417083 | orchestrator | TASK [osism.services.docker : Gather package facts] ****************************
2025-05-25 03:36:45.417863 | orchestrator | Sunday 25 May 2025 03:36:45 +0000 (0:00:00.813) 0:05:00.449 ************
2025-05-25 03:36:48.050265 | orchestrator | ok: [testbed-manager]
2025-05-25 03:36:48.050560 | orchestrator | ok: [testbed-node-2]
2025-05-25 03:36:48.054390 | orchestrator | ok: [testbed-node-5]
2025-05-25 03:36:48.054797 | orchestrator | ok: [testbed-node-1]
2025-05-25 03:36:48.055640 | orchestrator | ok: [testbed-node-3]
2025-05-25 03:36:48.056500 | orchestrator | ok: [testbed-node-0]
2025-05-25 03:36:48.057465 | orchestrator | ok: [testbed-node-4]
2025-05-25 03:36:48.057826 | orchestrator |
2025-05-25 03:36:48.060022 | orchestrator | TASK [osism.services.docker : Check whether packages are installed that should not be installed] ***
2025-05-25 03:36:48.060242 | orchestrator | Sunday 25 May 2025 03:36:48 +0000 (0:00:02.638) 0:05:03.088 ************
2025-05-25 03:36:48.114597 | orchestrator | skipping: [testbed-manager] => (item=containerd)
2025-05-25 03:36:48.191127 | orchestrator | skipping: [testbed-manager] => (item=docker.io)
2025-05-25 03:36:48.191278 | orchestrator | skipping: [testbed-manager] => (item=docker-engine)
2025-05-25 03:36:48.191553 | orchestrator | skipping: [testbed-node-3] => (item=containerd)
2025-05-25 03:36:48.194952 | orchestrator | skipping: [testbed-node-3] => (item=docker.io)
2025-05-25 03:36:48.194979 | orchestrator | skipping: [testbed-node-3] => (item=docker-engine)
2025-05-25 03:36:48.414830 | orchestrator | skipping: [testbed-manager]
2025-05-25 03:36:48.414926 | orchestrator | skipping: [testbed-node-4] => (item=containerd)
2025-05-25 03:36:48.499934 | orchestrator | skipping: [testbed-node-4] => (item=docker.io)
2025-05-25 03:36:48.501032 | orchestrator | skipping: [testbed-node-3]
2025-05-25 03:36:48.505121 | orchestrator | skipping: [testbed-node-4] => (item=docker-engine)
2025-05-25 03:36:48.505147 | orchestrator | skipping: [testbed-node-5] => (item=containerd)
2025-05-25 03:36:48.505159 | orchestrator | skipping: [testbed-node-5] => (item=docker.io)
2025-05-25 03:36:48.505170 | orchestrator | skipping: [testbed-node-5] => (item=docker-engine)
2025-05-25 03:36:48.570842 | orchestrator | skipping: [testbed-node-4]
2025-05-25
03:36:48.571306 | orchestrator | skipping: [testbed-node-0] => (item=containerd)  2025-05-25 03:36:48.574279 | orchestrator | skipping: [testbed-node-0] => (item=docker.io)  2025-05-25 03:36:48.575207 | orchestrator | skipping: [testbed-node-0] => (item=docker-engine)  2025-05-25 03:36:48.655788 | orchestrator | skipping: [testbed-node-5] 2025-05-25 03:36:48.656752 | orchestrator | skipping: [testbed-node-1] => (item=containerd)  2025-05-25 03:36:48.659982 | orchestrator | skipping: [testbed-node-1] => (item=docker.io)  2025-05-25 03:36:48.660007 | orchestrator | skipping: [testbed-node-1] => (item=docker-engine)  2025-05-25 03:36:48.783411 | orchestrator | skipping: [testbed-node-0] 2025-05-25 03:36:48.784688 | orchestrator | skipping: [testbed-node-1] 2025-05-25 03:36:48.785160 | orchestrator | skipping: [testbed-node-2] => (item=containerd)  2025-05-25 03:36:48.788514 | orchestrator | skipping: [testbed-node-2] => (item=docker.io)  2025-05-25 03:36:48.788543 | orchestrator | skipping: [testbed-node-2] => (item=docker-engine)  2025-05-25 03:36:48.788555 | orchestrator | skipping: [testbed-node-2] 2025-05-25 03:36:48.788568 | orchestrator | 2025-05-25 03:36:48.788949 | orchestrator | TASK [osism.services.docker : Install apt-transport-https package] ************* 2025-05-25 03:36:48.789393 | orchestrator | Sunday 25 May 2025 03:36:48 +0000 (0:00:00.733) 0:05:03.822 ************ 2025-05-25 03:36:54.755395 | orchestrator | ok: [testbed-manager] 2025-05-25 03:36:54.755599 | orchestrator | changed: [testbed-node-3] 2025-05-25 03:36:54.756185 | orchestrator | changed: [testbed-node-0] 2025-05-25 03:36:54.757832 | orchestrator | changed: [testbed-node-5] 2025-05-25 03:36:54.759377 | orchestrator | changed: [testbed-node-2] 2025-05-25 03:36:54.759891 | orchestrator | changed: [testbed-node-1] 2025-05-25 03:36:54.760773 | orchestrator | changed: [testbed-node-4] 2025-05-25 03:36:54.761418 | orchestrator | 2025-05-25 03:36:54.761847 | orchestrator | TASK 
[osism.services.docker : Add repository gpg key] ************************** 2025-05-25 03:36:54.762418 | orchestrator | Sunday 25 May 2025 03:36:54 +0000 (0:00:05.969) 0:05:09.791 ************ 2025-05-25 03:36:55.788300 | orchestrator | changed: [testbed-node-3] 2025-05-25 03:36:55.789241 | orchestrator | changed: [testbed-node-4] 2025-05-25 03:36:55.790694 | orchestrator | ok: [testbed-manager] 2025-05-25 03:36:55.792033 | orchestrator | changed: [testbed-node-5] 2025-05-25 03:36:55.793180 | orchestrator | changed: [testbed-node-0] 2025-05-25 03:36:55.793507 | orchestrator | changed: [testbed-node-1] 2025-05-25 03:36:55.793870 | orchestrator | changed: [testbed-node-2] 2025-05-25 03:36:55.794724 | orchestrator | 2025-05-25 03:36:55.794776 | orchestrator | TASK [osism.services.docker : Add repository] ********************************** 2025-05-25 03:36:55.794882 | orchestrator | Sunday 25 May 2025 03:36:55 +0000 (0:00:01.036) 0:05:10.828 ************ 2025-05-25 03:37:02.749920 | orchestrator | ok: [testbed-manager] 2025-05-25 03:37:02.751527 | orchestrator | changed: [testbed-node-3] 2025-05-25 03:37:02.754851 | orchestrator | changed: [testbed-node-2] 2025-05-25 03:37:02.755886 | orchestrator | changed: [testbed-node-0] 2025-05-25 03:37:02.758268 | orchestrator | changed: [testbed-node-5] 2025-05-25 03:37:02.758541 | orchestrator | changed: [testbed-node-4] 2025-05-25 03:37:02.759460 | orchestrator | changed: [testbed-node-1] 2025-05-25 03:37:02.760024 | orchestrator | 2025-05-25 03:37:02.761450 | orchestrator | TASK [osism.services.docker : Update package cache] **************************** 2025-05-25 03:37:02.761876 | orchestrator | Sunday 25 May 2025 03:37:02 +0000 (0:00:06.956) 0:05:17.784 ************ 2025-05-25 03:37:05.801831 | orchestrator | changed: [testbed-manager] 2025-05-25 03:37:05.801996 | orchestrator | changed: [testbed-node-3] 2025-05-25 03:37:05.802444 | orchestrator | changed: [testbed-node-5] 2025-05-25 03:37:05.803394 | orchestrator | 
changed: [testbed-node-0] 2025-05-25 03:37:05.804034 | orchestrator | changed: [testbed-node-4] 2025-05-25 03:37:05.804873 | orchestrator | changed: [testbed-node-1] 2025-05-25 03:37:05.805544 | orchestrator | changed: [testbed-node-2] 2025-05-25 03:37:05.806113 | orchestrator | 2025-05-25 03:37:05.807174 | orchestrator | TASK [osism.services.docker : Pin docker package version] ********************** 2025-05-25 03:37:05.807490 | orchestrator | Sunday 25 May 2025 03:37:05 +0000 (0:00:03.053) 0:05:20.837 ************ 2025-05-25 03:37:07.258364 | orchestrator | ok: [testbed-manager] 2025-05-25 03:37:07.258470 | orchestrator | changed: [testbed-node-3] 2025-05-25 03:37:07.258485 | orchestrator | changed: [testbed-node-4] 2025-05-25 03:37:07.258498 | orchestrator | changed: [testbed-node-5] 2025-05-25 03:37:07.258569 | orchestrator | changed: [testbed-node-0] 2025-05-25 03:37:07.259375 | orchestrator | changed: [testbed-node-1] 2025-05-25 03:37:07.259876 | orchestrator | changed: [testbed-node-2] 2025-05-25 03:37:07.260318 | orchestrator | 2025-05-25 03:37:07.260868 | orchestrator | TASK [osism.services.docker : Pin docker-cli package version] ****************** 2025-05-25 03:37:07.261422 | orchestrator | Sunday 25 May 2025 03:37:07 +0000 (0:00:01.454) 0:05:22.292 ************ 2025-05-25 03:37:08.566135 | orchestrator | ok: [testbed-manager] 2025-05-25 03:37:08.566546 | orchestrator | changed: [testbed-node-3] 2025-05-25 03:37:08.567432 | orchestrator | changed: [testbed-node-4] 2025-05-25 03:37:08.570301 | orchestrator | changed: [testbed-node-5] 2025-05-25 03:37:08.570590 | orchestrator | changed: [testbed-node-0] 2025-05-25 03:37:08.570891 | orchestrator | changed: [testbed-node-1] 2025-05-25 03:37:08.571350 | orchestrator | changed: [testbed-node-2] 2025-05-25 03:37:08.571681 | orchestrator | 2025-05-25 03:37:08.572388 | orchestrator | TASK [osism.services.docker : Unlock containerd package] *********************** 2025-05-25 03:37:08.572622 | orchestrator | Sunday 
25 May 2025 03:37:08 +0000 (0:00:01.308) 0:05:23.601 ************ 2025-05-25 03:37:08.770292 | orchestrator | skipping: [testbed-node-3] 2025-05-25 03:37:08.836675 | orchestrator | skipping: [testbed-node-4] 2025-05-25 03:37:08.906715 | orchestrator | skipping: [testbed-node-5] 2025-05-25 03:37:08.983130 | orchestrator | skipping: [testbed-node-0] 2025-05-25 03:37:09.204871 | orchestrator | skipping: [testbed-node-1] 2025-05-25 03:37:09.210570 | orchestrator | skipping: [testbed-node-2] 2025-05-25 03:37:09.210606 | orchestrator | changed: [testbed-manager] 2025-05-25 03:37:09.210650 | orchestrator | 2025-05-25 03:37:09.210664 | orchestrator | TASK [osism.services.docker : Install containerd package] ********************** 2025-05-25 03:37:09.210677 | orchestrator | Sunday 25 May 2025 03:37:09 +0000 (0:00:00.640) 0:05:24.242 ************ 2025-05-25 03:37:18.133231 | orchestrator | ok: [testbed-manager] 2025-05-25 03:37:18.133466 | orchestrator | changed: [testbed-node-3] 2025-05-25 03:37:18.134300 | orchestrator | changed: [testbed-node-5] 2025-05-25 03:37:18.136047 | orchestrator | changed: [testbed-node-2] 2025-05-25 03:37:18.137826 | orchestrator | changed: [testbed-node-4] 2025-05-25 03:37:18.139230 | orchestrator | changed: [testbed-node-1] 2025-05-25 03:37:18.140303 | orchestrator | changed: [testbed-node-0] 2025-05-25 03:37:18.140992 | orchestrator | 2025-05-25 03:37:18.141837 | orchestrator | TASK [osism.services.docker : Lock containerd package] ************************* 2025-05-25 03:37:18.142563 | orchestrator | Sunday 25 May 2025 03:37:18 +0000 (0:00:08.928) 0:05:33.171 ************ 2025-05-25 03:37:19.228590 | orchestrator | changed: [testbed-manager] 2025-05-25 03:37:19.228692 | orchestrator | changed: [testbed-node-3] 2025-05-25 03:37:19.228707 | orchestrator | changed: [testbed-node-4] 2025-05-25 03:37:19.228845 | orchestrator | changed: [testbed-node-5] 2025-05-25 03:37:19.229345 | orchestrator | changed: [testbed-node-0] 2025-05-25 03:37:19.230871 
| orchestrator | changed: [testbed-node-1] 2025-05-25 03:37:19.235724 | orchestrator | changed: [testbed-node-2] 2025-05-25 03:37:19.235750 | orchestrator | 2025-05-25 03:37:19.235930 | orchestrator | TASK [osism.services.docker : Install docker-cli package] ********************** 2025-05-25 03:37:19.237837 | orchestrator | Sunday 25 May 2025 03:37:19 +0000 (0:00:01.093) 0:05:34.264 ************ 2025-05-25 03:37:27.224985 | orchestrator | ok: [testbed-manager] 2025-05-25 03:37:27.225149 | orchestrator | changed: [testbed-node-3] 2025-05-25 03:37:27.225903 | orchestrator | changed: [testbed-node-2] 2025-05-25 03:37:27.226527 | orchestrator | changed: [testbed-node-0] 2025-05-25 03:37:27.227110 | orchestrator | changed: [testbed-node-4] 2025-05-25 03:37:27.228611 | orchestrator | changed: [testbed-node-5] 2025-05-25 03:37:27.229265 | orchestrator | changed: [testbed-node-1] 2025-05-25 03:37:27.229973 | orchestrator | 2025-05-25 03:37:27.231618 | orchestrator | TASK [osism.services.docker : Install docker package] ************************** 2025-05-25 03:37:27.232971 | orchestrator | Sunday 25 May 2025 03:37:27 +0000 (0:00:07.990) 0:05:42.255 ************ 2025-05-25 03:37:37.078654 | orchestrator | ok: [testbed-manager] 2025-05-25 03:37:37.079318 | orchestrator | changed: [testbed-node-3] 2025-05-25 03:37:37.081873 | orchestrator | changed: [testbed-node-0] 2025-05-25 03:37:37.082755 | orchestrator | changed: [testbed-node-2] 2025-05-25 03:37:37.083680 | orchestrator | changed: [testbed-node-1] 2025-05-25 03:37:37.084824 | orchestrator | changed: [testbed-node-5] 2025-05-25 03:37:37.085746 | orchestrator | changed: [testbed-node-4] 2025-05-25 03:37:37.086798 | orchestrator | 2025-05-25 03:37:37.087161 | orchestrator | TASK [osism.services.docker : Unblock installation of python docker packages] *** 2025-05-25 03:37:37.088017 | orchestrator | Sunday 25 May 2025 03:37:37 +0000 (0:00:09.861) 0:05:52.117 ************ 2025-05-25 03:37:37.476312 | orchestrator | ok: 
[testbed-manager] => (item=python3-docker) 2025-05-25 03:37:38.230824 | orchestrator | ok: [testbed-node-3] => (item=python3-docker) 2025-05-25 03:37:38.231240 | orchestrator | ok: [testbed-node-4] => (item=python3-docker) 2025-05-25 03:37:38.232458 | orchestrator | ok: [testbed-node-5] => (item=python3-docker) 2025-05-25 03:37:38.233519 | orchestrator | ok: [testbed-node-0] => (item=python3-docker) 2025-05-25 03:37:38.234366 | orchestrator | ok: [testbed-manager] => (item=python-docker) 2025-05-25 03:37:38.235794 | orchestrator | ok: [testbed-node-1] => (item=python3-docker) 2025-05-25 03:37:38.236790 | orchestrator | ok: [testbed-node-3] => (item=python-docker) 2025-05-25 03:37:38.237686 | orchestrator | ok: [testbed-node-2] => (item=python3-docker) 2025-05-25 03:37:38.238582 | orchestrator | ok: [testbed-node-4] => (item=python-docker) 2025-05-25 03:37:38.239394 | orchestrator | ok: [testbed-node-5] => (item=python-docker) 2025-05-25 03:37:38.240116 | orchestrator | ok: [testbed-node-0] => (item=python-docker) 2025-05-25 03:37:38.240935 | orchestrator | ok: [testbed-node-1] => (item=python-docker) 2025-05-25 03:37:38.241645 | orchestrator | ok: [testbed-node-2] => (item=python-docker) 2025-05-25 03:37:38.243773 | orchestrator | 2025-05-25 03:37:38.245268 | orchestrator | TASK [osism.services.docker : Install python3 docker package] ****************** 2025-05-25 03:37:38.245568 | orchestrator | Sunday 25 May 2025 03:37:38 +0000 (0:00:01.149) 0:05:53.266 ************ 2025-05-25 03:37:38.376708 | orchestrator | skipping: [testbed-manager] 2025-05-25 03:37:38.441197 | orchestrator | skipping: [testbed-node-3] 2025-05-25 03:37:38.502673 | orchestrator | skipping: [testbed-node-4] 2025-05-25 03:37:38.571307 | orchestrator | skipping: [testbed-node-5] 2025-05-25 03:37:38.632786 | orchestrator | skipping: [testbed-node-0] 2025-05-25 03:37:38.746716 | orchestrator | skipping: [testbed-node-1] 2025-05-25 03:37:38.747384 | orchestrator | skipping: [testbed-node-2] 
2025-05-25 03:37:38.748750 | orchestrator | 2025-05-25 03:37:38.750374 | orchestrator | TASK [osism.services.docker : Install python3 docker package from Debian Sid] *** 2025-05-25 03:37:38.751203 | orchestrator | Sunday 25 May 2025 03:37:38 +0000 (0:00:00.517) 0:05:53.784 ************ 2025-05-25 03:37:42.578837 | orchestrator | ok: [testbed-manager] 2025-05-25 03:37:42.579516 | orchestrator | changed: [testbed-node-5] 2025-05-25 03:37:42.583818 | orchestrator | changed: [testbed-node-0] 2025-05-25 03:37:42.583924 | orchestrator | changed: [testbed-node-3] 2025-05-25 03:37:42.583941 | orchestrator | changed: [testbed-node-1] 2025-05-25 03:37:42.583953 | orchestrator | changed: [testbed-node-2] 2025-05-25 03:37:42.583964 | orchestrator | changed: [testbed-node-4] 2025-05-25 03:37:42.584677 | orchestrator | 2025-05-25 03:37:42.584946 | orchestrator | TASK [osism.services.docker : Remove python docker packages (install python bindings from pip)] *** 2025-05-25 03:37:42.585626 | orchestrator | Sunday 25 May 2025 03:37:42 +0000 (0:00:03.830) 0:05:57.615 ************ 2025-05-25 03:37:42.710699 | orchestrator | skipping: [testbed-manager] 2025-05-25 03:37:42.794356 | orchestrator | skipping: [testbed-node-3] 2025-05-25 03:37:42.859302 | orchestrator | skipping: [testbed-node-4] 2025-05-25 03:37:42.925341 | orchestrator | skipping: [testbed-node-5] 2025-05-25 03:37:42.997138 | orchestrator | skipping: [testbed-node-0] 2025-05-25 03:37:43.118960 | orchestrator | skipping: [testbed-node-1] 2025-05-25 03:37:43.119923 | orchestrator | skipping: [testbed-node-2] 2025-05-25 03:37:43.120688 | orchestrator | 2025-05-25 03:37:43.121467 | orchestrator | TASK [osism.services.docker : Block installation of python docker packages (install python bindings from pip)] *** 2025-05-25 03:37:43.122445 | orchestrator | Sunday 25 May 2025 03:37:43 +0000 (0:00:00.541) 0:05:58.156 ************ 2025-05-25 03:37:43.187939 | orchestrator | skipping: [testbed-manager] => (item=python3-docker)  
2025-05-25 03:37:43.188137 | orchestrator | skipping: [testbed-manager] => (item=python-docker)  2025-05-25 03:37:43.274321 | orchestrator | skipping: [testbed-manager] 2025-05-25 03:37:43.274988 | orchestrator | skipping: [testbed-node-3] => (item=python3-docker)  2025-05-25 03:37:43.275708 | orchestrator | skipping: [testbed-node-3] => (item=python-docker)  2025-05-25 03:37:43.349765 | orchestrator | skipping: [testbed-node-3] 2025-05-25 03:37:43.351253 | orchestrator | skipping: [testbed-node-4] => (item=python3-docker)  2025-05-25 03:37:43.351287 | orchestrator | skipping: [testbed-node-4] => (item=python-docker)  2025-05-25 03:37:43.418265 | orchestrator | skipping: [testbed-node-4] 2025-05-25 03:37:43.418434 | orchestrator | skipping: [testbed-node-5] => (item=python3-docker)  2025-05-25 03:37:43.419525 | orchestrator | skipping: [testbed-node-5] => (item=python-docker)  2025-05-25 03:37:43.497634 | orchestrator | skipping: [testbed-node-5] 2025-05-25 03:37:43.499249 | orchestrator | skipping: [testbed-node-0] => (item=python3-docker)  2025-05-25 03:37:43.499622 | orchestrator | skipping: [testbed-node-0] => (item=python-docker)  2025-05-25 03:37:43.577295 | orchestrator | skipping: [testbed-node-0] 2025-05-25 03:37:43.577923 | orchestrator | skipping: [testbed-node-1] => (item=python3-docker)  2025-05-25 03:37:43.578605 | orchestrator | skipping: [testbed-node-1] => (item=python-docker)  2025-05-25 03:37:43.703763 | orchestrator | skipping: [testbed-node-1] 2025-05-25 03:37:43.704659 | orchestrator | skipping: [testbed-node-2] => (item=python3-docker)  2025-05-25 03:37:43.705698 | orchestrator | skipping: [testbed-node-2] => (item=python-docker)  2025-05-25 03:37:43.706252 | orchestrator | skipping: [testbed-node-2] 2025-05-25 03:37:43.707363 | orchestrator | 2025-05-25 03:37:43.707605 | orchestrator | TASK [osism.services.docker : Install python3-pip package (install python bindings from pip)] *** 2025-05-25 03:37:43.708476 | orchestrator | Sunday 25 May 
2025 03:37:43 +0000 (0:00:00.585) 0:05:58.741 ************ 2025-05-25 03:37:43.836909 | orchestrator | skipping: [testbed-manager] 2025-05-25 03:37:43.898925 | orchestrator | skipping: [testbed-node-3] 2025-05-25 03:37:43.969007 | orchestrator | skipping: [testbed-node-4] 2025-05-25 03:37:44.031045 | orchestrator | skipping: [testbed-node-5] 2025-05-25 03:37:44.093207 | orchestrator | skipping: [testbed-node-0] 2025-05-25 03:37:44.211312 | orchestrator | skipping: [testbed-node-1] 2025-05-25 03:37:44.211986 | orchestrator | skipping: [testbed-node-2] 2025-05-25 03:37:44.212424 | orchestrator | 2025-05-25 03:37:44.213007 | orchestrator | TASK [osism.services.docker : Install docker packages (install python bindings from pip)] *** 2025-05-25 03:37:44.216029 | orchestrator | Sunday 25 May 2025 03:37:44 +0000 (0:00:00.508) 0:05:59.250 ************ 2025-05-25 03:37:44.335620 | orchestrator | skipping: [testbed-manager] 2025-05-25 03:37:44.404522 | orchestrator | skipping: [testbed-node-3] 2025-05-25 03:37:44.468288 | orchestrator | skipping: [testbed-node-4] 2025-05-25 03:37:44.528634 | orchestrator | skipping: [testbed-node-5] 2025-05-25 03:37:44.597803 | orchestrator | skipping: [testbed-node-0] 2025-05-25 03:37:44.696463 | orchestrator | skipping: [testbed-node-1] 2025-05-25 03:37:44.696568 | orchestrator | skipping: [testbed-node-2] 2025-05-25 03:37:44.697140 | orchestrator | 2025-05-25 03:37:44.697688 | orchestrator | TASK [osism.services.docker : Install packages required by docker login] ******* 2025-05-25 03:37:44.698403 | orchestrator | Sunday 25 May 2025 03:37:44 +0000 (0:00:00.483) 0:05:59.734 ************ 2025-05-25 03:37:44.839965 | orchestrator | skipping: [testbed-manager] 2025-05-25 03:37:45.089591 | orchestrator | skipping: [testbed-node-3] 2025-05-25 03:37:45.156377 | orchestrator | skipping: [testbed-node-4] 2025-05-25 03:37:45.216358 | orchestrator | skipping: [testbed-node-5] 2025-05-25 03:37:45.288444 | orchestrator | skipping: [testbed-node-0] 
2025-05-25 03:37:45.407485 | orchestrator | skipping: [testbed-node-1] 2025-05-25 03:37:45.407925 | orchestrator | skipping: [testbed-node-2] 2025-05-25 03:37:45.409535 | orchestrator | 2025-05-25 03:37:45.410424 | orchestrator | TASK [osism.services.docker : Ensure that some packages are not installed] ***** 2025-05-25 03:37:45.411625 | orchestrator | Sunday 25 May 2025 03:37:45 +0000 (0:00:00.710) 0:06:00.445 ************ 2025-05-25 03:37:47.003402 | orchestrator | ok: [testbed-manager] 2025-05-25 03:37:47.003507 | orchestrator | ok: [testbed-node-3] 2025-05-25 03:37:47.003521 | orchestrator | ok: [testbed-node-4] 2025-05-25 03:37:47.003624 | orchestrator | ok: [testbed-node-0] 2025-05-25 03:37:47.003673 | orchestrator | ok: [testbed-node-5] 2025-05-25 03:37:47.004291 | orchestrator | ok: [testbed-node-2] 2025-05-25 03:37:47.005519 | orchestrator | ok: [testbed-node-1] 2025-05-25 03:37:47.005613 | orchestrator | 2025-05-25 03:37:47.005631 | orchestrator | TASK [osism.services.docker : Include config tasks] **************************** 2025-05-25 03:37:47.006818 | orchestrator | Sunday 25 May 2025 03:37:46 +0000 (0:00:01.594) 0:06:02.039 ************ 2025-05-25 03:37:47.835201 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/docker/tasks/config.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-05-25 03:37:47.835335 | orchestrator | 2025-05-25 03:37:47.835531 | orchestrator | TASK [osism.services.docker : Create plugins directory] ************************ 2025-05-25 03:37:47.837138 | orchestrator | Sunday 25 May 2025 03:37:47 +0000 (0:00:00.835) 0:06:02.875 ************ 2025-05-25 03:37:48.235667 | orchestrator | ok: [testbed-manager] 2025-05-25 03:37:48.822408 | orchestrator | changed: [testbed-node-3] 2025-05-25 03:37:48.822539 | orchestrator | changed: [testbed-node-4] 2025-05-25 03:37:48.823908 | orchestrator | changed: [testbed-node-5] 
2025-05-25 03:37:48.824741 | orchestrator | changed: [testbed-node-0] 2025-05-25 03:37:48.827090 | orchestrator | changed: [testbed-node-1] 2025-05-25 03:37:48.827190 | orchestrator | changed: [testbed-node-2] 2025-05-25 03:37:48.827948 | orchestrator | 2025-05-25 03:37:48.828898 | orchestrator | TASK [osism.services.docker : Create systemd overlay directory] **************** 2025-05-25 03:37:48.829449 | orchestrator | Sunday 25 May 2025 03:37:48 +0000 (0:00:00.985) 0:06:03.860 ************ 2025-05-25 03:37:49.260656 | orchestrator | ok: [testbed-manager] 2025-05-25 03:37:49.676290 | orchestrator | changed: [testbed-node-3] 2025-05-25 03:37:49.677160 | orchestrator | changed: [testbed-node-4] 2025-05-25 03:37:49.680929 | orchestrator | changed: [testbed-node-5] 2025-05-25 03:37:49.680980 | orchestrator | changed: [testbed-node-0] 2025-05-25 03:37:49.680990 | orchestrator | changed: [testbed-node-1] 2025-05-25 03:37:49.681562 | orchestrator | changed: [testbed-node-2] 2025-05-25 03:37:49.681951 | orchestrator | 2025-05-25 03:37:49.682614 | orchestrator | TASK [osism.services.docker : Copy systemd overlay file] *********************** 2025-05-25 03:37:49.683389 | orchestrator | Sunday 25 May 2025 03:37:49 +0000 (0:00:00.853) 0:06:04.713 ************ 2025-05-25 03:37:50.992718 | orchestrator | ok: [testbed-manager] 2025-05-25 03:37:50.992834 | orchestrator | changed: [testbed-node-3] 2025-05-25 03:37:50.992939 | orchestrator | changed: [testbed-node-5] 2025-05-25 03:37:50.993493 | orchestrator | changed: [testbed-node-4] 2025-05-25 03:37:50.994621 | orchestrator | changed: [testbed-node-0] 2025-05-25 03:37:50.994648 | orchestrator | changed: [testbed-node-1] 2025-05-25 03:37:50.995558 | orchestrator | changed: [testbed-node-2] 2025-05-25 03:37:50.995782 | orchestrator | 2025-05-25 03:37:50.996735 | orchestrator | TASK [osism.services.docker : Reload systemd daemon if systemd overlay file is changed] *** 2025-05-25 03:37:50.996836 | orchestrator | Sunday 25 May 2025 
03:37:50 +0000 (0:00:01.313) 0:06:06.027 ************ 2025-05-25 03:37:51.124272 | orchestrator | skipping: [testbed-manager] 2025-05-25 03:37:52.308496 | orchestrator | ok: [testbed-node-3] 2025-05-25 03:37:52.310269 | orchestrator | ok: [testbed-node-4] 2025-05-25 03:37:52.310450 | orchestrator | ok: [testbed-node-0] 2025-05-25 03:37:52.313169 | orchestrator | ok: [testbed-node-5] 2025-05-25 03:37:52.313860 | orchestrator | ok: [testbed-node-1] 2025-05-25 03:37:52.314811 | orchestrator | ok: [testbed-node-2] 2025-05-25 03:37:52.315660 | orchestrator | 2025-05-25 03:37:52.316586 | orchestrator | TASK [osism.services.docker : Copy limits configuration file] ****************** 2025-05-25 03:37:52.317579 | orchestrator | Sunday 25 May 2025 03:37:52 +0000 (0:00:01.318) 0:06:07.346 ************ 2025-05-25 03:37:53.605886 | orchestrator | ok: [testbed-manager] 2025-05-25 03:37:53.606416 | orchestrator | changed: [testbed-node-3] 2025-05-25 03:37:53.607426 | orchestrator | changed: [testbed-node-4] 2025-05-25 03:37:53.607902 | orchestrator | changed: [testbed-node-5] 2025-05-25 03:37:53.609046 | orchestrator | changed: [testbed-node-0] 2025-05-25 03:37:53.609579 | orchestrator | changed: [testbed-node-1] 2025-05-25 03:37:53.610274 | orchestrator | changed: [testbed-node-2] 2025-05-25 03:37:53.610938 | orchestrator | 2025-05-25 03:37:53.611522 | orchestrator | TASK [osism.services.docker : Copy daemon.json configuration file] ************* 2025-05-25 03:37:53.612214 | orchestrator | Sunday 25 May 2025 03:37:53 +0000 (0:00:01.296) 0:06:08.642 ************ 2025-05-25 03:37:55.177963 | orchestrator | changed: [testbed-manager] 2025-05-25 03:37:55.178342 | orchestrator | changed: [testbed-node-3] 2025-05-25 03:37:55.178370 | orchestrator | changed: [testbed-node-4] 2025-05-25 03:37:55.178382 | orchestrator | changed: [testbed-node-5] 2025-05-25 03:37:55.178692 | orchestrator | changed: [testbed-node-0] 2025-05-25 03:37:55.179015 | orchestrator | changed: [testbed-node-1] 
2025-05-25 03:37:55.179392 | orchestrator | changed: [testbed-node-2] 2025-05-25 03:37:55.179770 | orchestrator | 2025-05-25 03:37:55.180318 | orchestrator | TASK [osism.services.docker : Include service tasks] *************************** 2025-05-25 03:37:55.180401 | orchestrator | Sunday 25 May 2025 03:37:55 +0000 (0:00:01.573) 0:06:10.216 ************ 2025-05-25 03:37:56.027111 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/docker/tasks/service.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-05-25 03:37:56.027426 | orchestrator | 2025-05-25 03:37:56.029110 | orchestrator | TASK [osism.services.docker : Reload systemd daemon] *************************** 2025-05-25 03:37:56.034952 | orchestrator | Sunday 25 May 2025 03:37:56 +0000 (0:00:00.848) 0:06:11.064 ************ 2025-05-25 03:37:57.344608 | orchestrator | ok: [testbed-node-3] 2025-05-25 03:37:57.344825 | orchestrator | ok: [testbed-manager] 2025-05-25 03:37:57.345448 | orchestrator | ok: [testbed-node-4] 2025-05-25 03:37:57.346194 | orchestrator | ok: [testbed-node-5] 2025-05-25 03:37:57.346813 | orchestrator | ok: [testbed-node-0] 2025-05-25 03:37:57.350540 | orchestrator | ok: [testbed-node-1] 2025-05-25 03:37:57.352829 | orchestrator | ok: [testbed-node-2] 2025-05-25 03:37:57.352878 | orchestrator | 2025-05-25 03:37:57.352893 | orchestrator | TASK [osism.services.docker : Manage service] ********************************** 2025-05-25 03:37:57.352905 | orchestrator | Sunday 25 May 2025 03:37:57 +0000 (0:00:01.316) 0:06:12.381 ************ 2025-05-25 03:37:58.449994 | orchestrator | ok: [testbed-manager] 2025-05-25 03:37:58.450274 | orchestrator | ok: [testbed-node-3] 2025-05-25 03:37:58.451108 | orchestrator | ok: [testbed-node-4] 2025-05-25 03:37:58.455453 | orchestrator | ok: [testbed-node-5] 2025-05-25 03:37:58.455491 | orchestrator | ok: [testbed-node-0] 2025-05-25 
03:37:58.455502 | orchestrator | ok: [testbed-node-1] 2025-05-25 03:37:58.455514 | orchestrator | ok: [testbed-node-2] 2025-05-25 03:37:58.455526 | orchestrator | 2025-05-25 03:37:58.455863 | orchestrator | TASK [osism.services.docker : Manage docker socket service] ******************** 2025-05-25 03:37:58.456235 | orchestrator | Sunday 25 May 2025 03:37:58 +0000 (0:00:01.107) 0:06:13.488 ************ 2025-05-25 03:37:59.758457 | orchestrator | ok: [testbed-manager] 2025-05-25 03:37:59.761282 | orchestrator | ok: [testbed-node-3] 2025-05-25 03:37:59.761314 | orchestrator | ok: [testbed-node-4] 2025-05-25 03:37:59.761326 | orchestrator | ok: [testbed-node-5] 2025-05-25 03:37:59.761337 | orchestrator | ok: [testbed-node-0] 2025-05-25 03:37:59.762011 | orchestrator | ok: [testbed-node-1] 2025-05-25 03:37:59.762501 | orchestrator | ok: [testbed-node-2] 2025-05-25 03:37:59.763623 | orchestrator | 2025-05-25 03:37:59.764960 | orchestrator | TASK [osism.services.docker : Manage containerd service] *********************** 2025-05-25 03:37:59.765812 | orchestrator | Sunday 25 May 2025 03:37:59 +0000 (0:00:01.304) 0:06:14.793 ************ 2025-05-25 03:38:00.860576 | orchestrator | ok: [testbed-manager] 2025-05-25 03:38:00.861289 | orchestrator | ok: [testbed-node-3] 2025-05-25 03:38:00.862176 | orchestrator | ok: [testbed-node-5] 2025-05-25 03:38:00.864294 | orchestrator | ok: [testbed-node-4] 2025-05-25 03:38:00.865773 | orchestrator | ok: [testbed-node-0] 2025-05-25 03:38:00.866256 | orchestrator | ok: [testbed-node-1] 2025-05-25 03:38:00.866875 | orchestrator | ok: [testbed-node-2] 2025-05-25 03:38:00.867952 | orchestrator | 2025-05-25 03:38:00.868598 | orchestrator | TASK [osism.services.docker : Include bootstrap tasks] ************************* 2025-05-25 03:38:00.869202 | orchestrator | Sunday 25 May 2025 03:38:00 +0000 (0:00:01.103) 0:06:15.896 ************ 2025-05-25 03:38:02.192933 | orchestrator | included: 
/usr/share/ansible/collections/ansible_collections/osism/services/roles/docker/tasks/bootstrap.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2025-05-25 03:38:02.193035 | orchestrator |
2025-05-25 03:38:02.193052 | orchestrator | TASK [osism.services.docker : Flush handlers] **********************************
2025-05-25 03:38:02.195918 | orchestrator | Sunday 25 May 2025  03:38:01 +0000 (0:00:00.871)       0:06:16.768 ************
2025-05-25 03:38:02.196800 | orchestrator |
2025-05-25 03:38:02.197275 | orchestrator | TASK [osism.services.docker : Flush handlers] **********************************
2025-05-25 03:38:02.198436 | orchestrator | Sunday 25 May 2025  03:38:01 +0000 (0:00:00.036)       0:06:16.804 ************
2025-05-25 03:38:02.198843 | orchestrator |
2025-05-25 03:38:02.199857 | orchestrator | TASK [osism.services.docker : Flush handlers] **********************************
2025-05-25 03:38:02.200709 | orchestrator | Sunday 25 May 2025  03:38:01 +0000 (0:00:00.036)       0:06:16.841 ************
2025-05-25 03:38:02.201691 | orchestrator |
2025-05-25 03:38:02.202571 | orchestrator | TASK [osism.services.docker : Flush handlers] **********************************
2025-05-25 03:38:02.202995 | orchestrator | Sunday 25 May 2025  03:38:01 +0000 (0:00:00.043)       0:06:16.885 ************
2025-05-25 03:38:02.203732 | orchestrator |
2025-05-25 03:38:02.205216 | orchestrator | TASK [osism.services.docker : Flush handlers] **********************************
2025-05-25 03:38:02.205303 | orchestrator | Sunday 25 May 2025  03:38:01 +0000 (0:00:00.050)       0:06:16.935 ************
2025-05-25 03:38:02.205996 | orchestrator |
2025-05-25 03:38:02.206721 | orchestrator | TASK [osism.services.docker : Flush handlers] **********************************
2025-05-25 03:38:02.207327 | orchestrator | Sunday 25 May 2025  03:38:01 +0000 (0:00:00.037)       0:06:16.973 ************
2025-05-25 03:38:02.207901 | orchestrator |
2025-05-25 03:38:02.208441 | orchestrator | TASK [osism.services.docker : Flush handlers] **********************************
2025-05-25 03:38:02.208934 | orchestrator | Sunday 25 May 2025  03:38:02 +0000 (0:00:00.213)       0:06:17.186 ************
2025-05-25 03:38:02.209789 | orchestrator |
2025-05-25 03:38:02.210888 | orchestrator | RUNNING HANDLER [osism.commons.repository : Force update of package cache] *****
2025-05-25 03:38:02.211486 | orchestrator | Sunday 25 May 2025  03:38:02 +0000 (0:00:00.039)       0:06:17.225 ************
2025-05-25 03:38:03.248197 | orchestrator | ok: [testbed-node-0]
2025-05-25 03:38:03.249575 | orchestrator | ok: [testbed-node-1]
2025-05-25 03:38:03.250579 | orchestrator | ok: [testbed-node-2]
2025-05-25 03:38:03.251031 | orchestrator |
2025-05-25 03:38:03.252750 | orchestrator | RUNNING HANDLER [osism.services.rsyslog : Restart rsyslog service] *************
2025-05-25 03:38:03.253031 | orchestrator | Sunday 25 May 2025  03:38:03 +0000 (0:00:01.058)       0:06:18.283 ************
2025-05-25 03:38:04.520794 | orchestrator | changed: [testbed-manager]
2025-05-25 03:38:04.521221 | orchestrator | changed: [testbed-node-3]
2025-05-25 03:38:04.522535 | orchestrator | changed: [testbed-node-4]
2025-05-25 03:38:04.523947 | orchestrator | changed: [testbed-node-5]
2025-05-25 03:38:04.525679 | orchestrator | changed: [testbed-node-0]
2025-05-25 03:38:04.526219 | orchestrator | changed: [testbed-node-1]
2025-05-25 03:38:04.526672 | orchestrator | changed: [testbed-node-2]
2025-05-25 03:38:04.528342 | orchestrator |
2025-05-25 03:38:04.530385 | orchestrator | RUNNING HANDLER [osism.services.smartd : Restart smartd service] ***************
2025-05-25 03:38:04.530622 | orchestrator | Sunday 25 May 2025  03:38:04 +0000 (0:00:01.273)       0:06:19.556 ************
2025-05-25 03:38:05.598617 | orchestrator | changed: [testbed-manager]
2025-05-25 03:38:05.599130 | orchestrator | changed: [testbed-node-3]
2025-05-25 03:38:05.600871 | orchestrator | changed: [testbed-node-4]
2025-05-25 03:38:05.601223 | orchestrator | changed: [testbed-node-5]
2025-05-25 03:38:05.602206 | orchestrator | changed: [testbed-node-0]
2025-05-25 03:38:05.603923 | orchestrator | changed: [testbed-node-1]
2025-05-25 03:38:05.604607 | orchestrator | changed: [testbed-node-2]
2025-05-25 03:38:05.605346 | orchestrator |
2025-05-25 03:38:05.605954 | orchestrator | RUNNING HANDLER [osism.services.docker : Restart docker service] ***************
2025-05-25 03:38:05.606655 | orchestrator | Sunday 25 May 2025  03:38:05 +0000 (0:00:01.079)       0:06:20.636 ************
2025-05-25 03:38:05.727655 | orchestrator | skipping: [testbed-manager]
2025-05-25 03:38:07.730721 | orchestrator | changed: [testbed-node-3]
2025-05-25 03:38:07.730901 | orchestrator | changed: [testbed-node-4]
2025-05-25 03:38:07.731727 | orchestrator | changed: [testbed-node-5]
2025-05-25 03:38:07.732598 | orchestrator | changed: [testbed-node-0]
2025-05-25 03:38:07.732962 | orchestrator | changed: [testbed-node-2]
2025-05-25 03:38:07.734918 | orchestrator | changed: [testbed-node-1]
2025-05-25 03:38:07.735305 | orchestrator |
2025-05-25 03:38:07.736298 | orchestrator | RUNNING HANDLER [osism.services.docker : Wait after docker service restart] ****
2025-05-25 03:38:07.736951 | orchestrator | Sunday 25 May 2025  03:38:07 +0000 (0:00:02.130)       0:06:22.766 ************
2025-05-25 03:38:07.847796 | orchestrator | skipping: [testbed-node-3]
2025-05-25 03:38:07.850902 | orchestrator |
2025-05-25 03:38:07.852051 | orchestrator | TASK [osism.services.docker : Add user to docker group] ************************
2025-05-25 03:38:07.856392 | orchestrator | Sunday 25 May 2025  03:38:07 +0000 (0:00:00.118)       0:06:22.885 ************
2025-05-25 03:38:09.020830 | orchestrator | ok: [testbed-manager]
2025-05-25 03:38:09.021007 | orchestrator | changed: [testbed-node-3]
2025-05-25 03:38:09.021789 | orchestrator | changed: [testbed-node-4]
2025-05-25 03:38:09.024271 | orchestrator | changed: [testbed-node-5]
2025-05-25 03:38:09.024423 | orchestrator | changed: [testbed-node-0]
2025-05-25 03:38:09.025928 | orchestrator | changed: [testbed-node-1]
2025-05-25 03:38:09.027446 | orchestrator | changed: [testbed-node-2]
2025-05-25 03:38:09.027992 | orchestrator |
2025-05-25 03:38:09.028890 | orchestrator | TASK [osism.services.docker : Log into private registry and force re-authorization] ***
2025-05-25 03:38:09.029867 | orchestrator | Sunday 25 May 2025  03:38:09 +0000 (0:00:01.171)       0:06:24.056 ************
2025-05-25 03:38:09.155104 | orchestrator | skipping: [testbed-manager]
2025-05-25 03:38:09.219581 | orchestrator | skipping: [testbed-node-3]
2025-05-25 03:38:09.290832 | orchestrator | skipping: [testbed-node-4]
2025-05-25 03:38:09.356960 | orchestrator | skipping: [testbed-node-5]
2025-05-25 03:38:09.419431 | orchestrator | skipping: [testbed-node-0]
2025-05-25 03:38:09.549740 | orchestrator | skipping: [testbed-node-1]
2025-05-25 03:38:09.550342 | orchestrator | skipping: [testbed-node-2]
2025-05-25 03:38:09.551299 | orchestrator |
2025-05-25 03:38:09.552117 | orchestrator | TASK [osism.services.docker : Include facts tasks] *****************************
2025-05-25 03:38:09.556109 | orchestrator | Sunday 25 May 2025  03:38:09 +0000 (0:00:00.530)       0:06:24.587 ************
2025-05-25 03:38:10.488471 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/docker/tasks/facts.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2025-05-25 03:38:10.489092 | orchestrator |
2025-05-25 03:38:10.492893 | orchestrator | TASK [osism.services.docker : Create facts directory] **************************
2025-05-25 03:38:10.492929 | orchestrator | Sunday 25 May 2025  03:38:10 +0000 (0:00:00.939)       0:06:25.526 ************
2025-05-25 03:38:10.918667 | orchestrator | ok: [testbed-manager]
2025-05-25 03:38:11.331739 | orchestrator | ok: [testbed-node-3]
2025-05-25 03:38:11.333221 | orchestrator | ok: [testbed-node-4]
2025-05-25 03:38:11.333964 | orchestrator | ok: [testbed-node-5]
2025-05-25 03:38:11.335840 | orchestrator | ok: [testbed-node-0]
2025-05-25 03:38:11.335906 | orchestrator | ok: [testbed-node-1]
2025-05-25 03:38:11.336654 | orchestrator | ok: [testbed-node-2]
2025-05-25 03:38:11.337517 | orchestrator |
2025-05-25 03:38:11.337959 | orchestrator | TASK [osism.services.docker : Copy docker fact files] **************************
2025-05-25 03:38:11.338696 | orchestrator | Sunday 25 May 2025  03:38:11 +0000 (0:00:00.842)       0:06:26.369 ************
2025-05-25 03:38:13.873337 | orchestrator | ok: [testbed-manager] => (item=docker_containers)
2025-05-25 03:38:13.873937 | orchestrator | changed: [testbed-node-3] => (item=docker_containers)
2025-05-25 03:38:13.874985 | orchestrator | changed: [testbed-node-4] => (item=docker_containers)
2025-05-25 03:38:13.880411 | orchestrator | changed: [testbed-node-5] => (item=docker_containers)
2025-05-25 03:38:13.880665 | orchestrator | changed: [testbed-node-0] => (item=docker_containers)
2025-05-25 03:38:13.882349 | orchestrator | changed: [testbed-node-1] => (item=docker_containers)
2025-05-25 03:38:13.885267 | orchestrator | changed: [testbed-node-2] => (item=docker_containers)
2025-05-25 03:38:13.886207 | orchestrator | ok: [testbed-manager] => (item=docker_images)
2025-05-25 03:38:13.886952 | orchestrator | changed: [testbed-node-3] => (item=docker_images)
2025-05-25 03:38:13.887466 | orchestrator | changed: [testbed-node-4] => (item=docker_images)
2025-05-25 03:38:13.888634 | orchestrator | changed: [testbed-node-5] => (item=docker_images)
2025-05-25 03:38:13.889325 | orchestrator | changed: [testbed-node-0] => (item=docker_images)
2025-05-25 03:38:13.889742 | orchestrator | changed: [testbed-node-1] => (item=docker_images)
2025-05-25 03:38:13.890560 | orchestrator | changed: [testbed-node-2] => (item=docker_images)
2025-05-25 03:38:13.891220 | orchestrator |
2025-05-25 03:38:13.891436 | orchestrator | TASK [osism.commons.docker_compose : This install type is not supported] *******
2025-05-25 03:38:13.892248 | orchestrator | Sunday 25 May 2025  03:38:13 +0000 (0:00:02.540)       0:06:28.910 ************
2025-05-25 03:38:14.013909 | orchestrator | skipping: [testbed-manager]
2025-05-25 03:38:14.075695 | orchestrator | skipping: [testbed-node-3]
2025-05-25 03:38:14.139576 | orchestrator | skipping: [testbed-node-4]
2025-05-25 03:38:14.210649 | orchestrator | skipping: [testbed-node-5]
2025-05-25 03:38:14.271389 | orchestrator | skipping: [testbed-node-0]
2025-05-25 03:38:14.377434 | orchestrator | skipping: [testbed-node-1]
2025-05-25 03:38:14.377545 | orchestrator | skipping: [testbed-node-2]
2025-05-25 03:38:14.377865 | orchestrator |
2025-05-25 03:38:14.378279 | orchestrator | TASK [osism.commons.docker_compose : Include distribution specific install tasks] ***
2025-05-25 03:38:14.378628 | orchestrator | Sunday 25 May 2025  03:38:14 +0000 (0:00:00.505)       0:06:29.415 ************
2025-05-25 03:38:15.183277 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/docker_compose/tasks/install-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2025-05-25 03:38:15.183623 | orchestrator |
2025-05-25 03:38:15.186877 | orchestrator | TASK [osism.commons.docker_compose : Remove docker-compose apt preferences file] ***
2025-05-25 03:38:15.186908 | orchestrator | Sunday 25 May 2025  03:38:15 +0000 (0:00:00.805)       0:06:30.221 ************
2025-05-25 03:38:15.592330 | orchestrator | ok: [testbed-manager]
2025-05-25 03:38:15.654486 | orchestrator | ok: [testbed-node-3]
2025-05-25 03:38:16.205236 | orchestrator | ok: [testbed-node-4]
2025-05-25 03:38:16.205342 | orchestrator | ok: [testbed-node-5]
2025-05-25 03:38:16.205355 | orchestrator | ok: [testbed-node-0]
2025-05-25 03:38:16.206341 | orchestrator | ok: [testbed-node-1]
2025-05-25 03:38:16.206367 | orchestrator | ok: [testbed-node-2]
2025-05-25 03:38:16.206853 | orchestrator |
2025-05-25 03:38:16.207972 | orchestrator | TASK [osism.commons.docker_compose : Get checksum of docker-compose file] ******
2025-05-25 03:38:16.208719 | orchestrator | Sunday 25 May 2025  03:38:16 +0000 (0:00:01.015)       0:06:31.236 ************
2025-05-25 03:38:16.608661 | orchestrator | ok: [testbed-manager]
2025-05-25 03:38:16.998673 | orchestrator | ok: [testbed-node-3]
2025-05-25 03:38:16.999231 | orchestrator | ok: [testbed-node-4]
2025-05-25 03:38:16.999946 | orchestrator | ok: [testbed-node-5]
2025-05-25 03:38:17.000685 | orchestrator | ok: [testbed-node-0]
2025-05-25 03:38:17.001554 | orchestrator | ok: [testbed-node-1]
2025-05-25 03:38:17.002135 | orchestrator | ok: [testbed-node-2]
2025-05-25 03:38:17.002542 | orchestrator |
2025-05-25 03:38:17.003211 | orchestrator | TASK [osism.commons.docker_compose : Remove docker-compose binary] *************
2025-05-25 03:38:17.003711 | orchestrator | Sunday 25 May 2025  03:38:16 +0000 (0:00:00.801)       0:06:32.037 ************
2025-05-25 03:38:17.126279 | orchestrator | skipping: [testbed-manager]
2025-05-25 03:38:17.198412 | orchestrator | skipping: [testbed-node-3]
2025-05-25 03:38:17.266235 | orchestrator | skipping: [testbed-node-4]
2025-05-25 03:38:17.324876 | orchestrator | skipping: [testbed-node-5]
2025-05-25 03:38:17.400922 | orchestrator | skipping: [testbed-node-0]
2025-05-25 03:38:17.490486 | orchestrator | skipping: [testbed-node-1]
2025-05-25 03:38:17.490655 | orchestrator | skipping: [testbed-node-2]
2025-05-25 03:38:17.491514 | orchestrator |
2025-05-25 03:38:17.491935 | orchestrator | TASK [osism.commons.docker_compose : Uninstall docker-compose package] *********
2025-05-25 03:38:17.492489 | orchestrator | Sunday 25 May 2025  03:38:17 +0000 (0:00:00.490)       0:06:32.528 ************
2025-05-25 03:38:18.806431 | orchestrator | ok: [testbed-manager]
2025-05-25 03:38:18.806593 | orchestrator | ok: [testbed-node-3]
2025-05-25 03:38:18.808731 | orchestrator | ok: [testbed-node-4]
2025-05-25 03:38:18.810212 | orchestrator | ok: [testbed-node-5]
2025-05-25 03:38:18.810877 | orchestrator | ok: [testbed-node-0]
2025-05-25 03:38:18.812020 | orchestrator | ok: [testbed-node-2]
2025-05-25 03:38:18.813511 | orchestrator | ok: [testbed-node-1]
2025-05-25 03:38:18.814575 | orchestrator |
2025-05-25 03:38:18.815283 | orchestrator | TASK [osism.commons.docker_compose : Copy docker-compose script] ***************
2025-05-25 03:38:18.816041 | orchestrator | Sunday 25 May 2025  03:38:18 +0000 (0:00:01.313)       0:06:33.841 ************
2025-05-25 03:38:18.932532 | orchestrator | skipping: [testbed-manager]
2025-05-25 03:38:18.999405 | orchestrator | skipping: [testbed-node-3]
2025-05-25 03:38:19.063057 | orchestrator | skipping: [testbed-node-4]
2025-05-25 03:38:19.129645 | orchestrator | skipping: [testbed-node-5]
2025-05-25 03:38:19.194360 | orchestrator | skipping: [testbed-node-0]
2025-05-25 03:38:19.485577 | orchestrator | skipping: [testbed-node-1]
2025-05-25 03:38:19.486555 | orchestrator | skipping: [testbed-node-2]
2025-05-25 03:38:19.487714 | orchestrator |
2025-05-25 03:38:19.488599 | orchestrator | TASK [osism.commons.docker_compose : Install docker-compose-plugin package] ****
2025-05-25 03:38:19.491815 | orchestrator | Sunday 25 May 2025  03:38:19 +0000 (0:00:00.682)       0:06:34.524 ************
2025-05-25 03:38:26.459850 | orchestrator | ok: [testbed-manager]
2025-05-25 03:38:26.460119 | orchestrator | changed: [testbed-node-3]
2025-05-25 03:38:26.462781 | orchestrator | changed: [testbed-node-5]
2025-05-25 03:38:26.462842 | orchestrator | changed: [testbed-node-4]
2025-05-25 03:38:26.462899 | orchestrator | changed: [testbed-node-0]
2025-05-25 03:38:26.463369 | orchestrator | changed: [testbed-node-2]
2025-05-25 03:38:26.464513 | orchestrator | changed: [testbed-node-1]
2025-05-25 03:38:26.464728 | orchestrator |
2025-05-25 03:38:26.465157 | orchestrator | TASK [osism.commons.docker_compose : Copy osism.target systemd file] ***********
2025-05-25 03:38:26.465633 | orchestrator | Sunday 25 May 2025  03:38:26 +0000 (0:00:06.970)       0:06:41.495 ************
2025-05-25 03:38:27.731366 | orchestrator | ok: [testbed-manager]
2025-05-25 03:38:27.731512 | orchestrator | changed: [testbed-node-3]
2025-05-25 03:38:27.731528 | orchestrator | changed: [testbed-node-4]
2025-05-25 03:38:27.731617 | orchestrator | changed: [testbed-node-5]
2025-05-25 03:38:27.733184 | orchestrator | changed: [testbed-node-0]
2025-05-25 03:38:27.734235 | orchestrator | changed: [testbed-node-1]
2025-05-25 03:38:27.735404 | orchestrator | changed: [testbed-node-2]
2025-05-25 03:38:27.737833 | orchestrator |
2025-05-25 03:38:27.738416 | orchestrator | TASK [osism.commons.docker_compose : Enable osism.target] **********************
2025-05-25 03:38:27.740698 | orchestrator | Sunday 25 May 2025  03:38:27 +0000 (0:00:01.271)       0:06:42.766 ************
2025-05-25 03:38:29.389549 | orchestrator | ok: [testbed-manager]
2025-05-25 03:38:29.389661 | orchestrator | changed: [testbed-node-3]
2025-05-25 03:38:29.392422 | orchestrator | changed: [testbed-node-5]
2025-05-25 03:38:29.393484 | orchestrator | changed: [testbed-node-4]
2025-05-25 03:38:29.394520 | orchestrator | changed: [testbed-node-0]
2025-05-25 03:38:29.394962 | orchestrator | changed: [testbed-node-1]
2025-05-25 03:38:29.395929 | orchestrator | changed: [testbed-node-2]
2025-05-25 03:38:29.396445 | orchestrator |
2025-05-25 03:38:29.398213 | orchestrator | TASK [osism.commons.docker_compose : Copy docker-compose systemd unit file] ****
2025-05-25 03:38:29.398255 | orchestrator | Sunday 25 May 2025  03:38:29 +0000 (0:00:01.659)       0:06:44.426 ************
2025-05-25 03:38:31.108734 | orchestrator | ok: [testbed-manager]
2025-05-25 03:38:31.112951 | orchestrator | changed: [testbed-node-3]
2025-05-25 03:38:31.115604 | orchestrator | changed: [testbed-node-4]
2025-05-25 03:38:31.116781 | orchestrator | changed: [testbed-node-5]
2025-05-25 03:38:31.117921 | orchestrator | changed: [testbed-node-0]
2025-05-25 03:38:31.118661 | orchestrator | changed: [testbed-node-1]
2025-05-25 03:38:31.119266 | orchestrator | changed: [testbed-node-2]
2025-05-25 03:38:31.120046 | orchestrator |
2025-05-25 03:38:31.120656 | orchestrator | TASK [osism.commons.facts : Create custom facts directory] *********************
2025-05-25 03:38:31.121420 | orchestrator | Sunday 25 May 2025  03:38:31 +0000 (0:00:01.718)       0:06:46.145 ************
2025-05-25 03:38:31.524570 | orchestrator | ok: [testbed-manager]
2025-05-25 03:38:31.955154 | orchestrator | ok: [testbed-node-3]
2025-05-25 03:38:31.955751 | orchestrator | ok: [testbed-node-4]
2025-05-25 03:38:31.956582 | orchestrator | ok: [testbed-node-5]
2025-05-25 03:38:31.957306 | orchestrator | ok: [testbed-node-0]
2025-05-25 03:38:31.958160 | orchestrator | ok: [testbed-node-1]
2025-05-25 03:38:31.961068 | orchestrator | ok: [testbed-node-2]
2025-05-25 03:38:31.961129 | orchestrator |
2025-05-25 03:38:31.961143 | orchestrator | TASK [osism.commons.facts : Copy fact files] ***********************************
2025-05-25 03:38:31.961157 | orchestrator | Sunday 25 May 2025  03:38:31 +0000 (0:00:00.849)       0:06:46.995 ************
2025-05-25 03:38:32.080823 | orchestrator | skipping: [testbed-manager]
2025-05-25 03:38:32.169375 | orchestrator | skipping: [testbed-node-3]
2025-05-25 03:38:32.233516 | orchestrator | skipping: [testbed-node-4]
2025-05-25 03:38:32.294599 | orchestrator | skipping: [testbed-node-5]
2025-05-25 03:38:32.371511 | orchestrator | skipping: [testbed-node-0]
2025-05-25 03:38:32.761840 | orchestrator | skipping: [testbed-node-1]
2025-05-25 03:38:32.762187 | orchestrator | skipping: [testbed-node-2]
2025-05-25 03:38:32.767045 | orchestrator |
2025-05-25 03:38:32.767177 | orchestrator | TASK [osism.services.chrony : Check minimum and maximum number of servers] *****
2025-05-25 03:38:32.767195 | orchestrator | Sunday 25 May 2025  03:38:32 +0000 (0:00:00.504)       0:06:47.798 ************
2025-05-25 03:38:32.902204 | orchestrator | skipping: [testbed-manager]
2025-05-25 03:38:32.963900 | orchestrator | skipping: [testbed-node-3]
2025-05-25 03:38:33.036308 | orchestrator | skipping: [testbed-node-4]
2025-05-25 03:38:33.104287 | orchestrator | skipping: [testbed-node-5]
2025-05-25 03:38:33.164644 | orchestrator | skipping: [testbed-node-0]
2025-05-25 03:38:33.263595 | orchestrator | skipping: [testbed-node-1]
2025-05-25 03:38:33.264020 | orchestrator | skipping: [testbed-node-2]
2025-05-25 03:38:33.264782 | orchestrator |
2025-05-25 03:38:33.265638 | orchestrator | TASK [osism.services.chrony : Gather variables for each operating system] ******
2025-05-25 03:38:33.266734 | orchestrator | Sunday 25 May 2025  03:38:33 +0000 (0:00:00.504)       0:06:48.302 ************
2025-05-25 03:38:33.391747 | orchestrator | ok: [testbed-manager]
2025-05-25 03:38:33.463116 | orchestrator | ok: [testbed-node-3]
2025-05-25 03:38:33.695163 | orchestrator | ok: [testbed-node-4]
2025-05-25 03:38:33.759496 | orchestrator | ok: [testbed-node-5]
2025-05-25 03:38:33.821611 | orchestrator | ok: [testbed-node-0]
2025-05-25 03:38:33.923217 | orchestrator | ok: [testbed-node-1]
2025-05-25 03:38:33.923954 | orchestrator | ok: [testbed-node-2]
2025-05-25 03:38:33.925497 | orchestrator |
2025-05-25 03:38:33.929179 | orchestrator | TASK [osism.services.chrony : Set chrony_conf_file variable to default value] ***
2025-05-25 03:38:33.929206 | orchestrator | Sunday 25 May 2025  03:38:33 +0000 (0:00:00.658)       0:06:48.961 ************
2025-05-25 03:38:34.066642 | orchestrator | ok: [testbed-manager]
2025-05-25 03:38:34.137032 | orchestrator | ok: [testbed-node-3]
2025-05-25 03:38:34.200580 | orchestrator | ok: [testbed-node-4]
2025-05-25 03:38:34.266010 | orchestrator | ok: [testbed-node-5]
2025-05-25 03:38:34.332556 | orchestrator | ok: [testbed-node-0]
2025-05-25 03:38:34.438369 | orchestrator | ok: [testbed-node-1]
2025-05-25 03:38:34.438891 | orchestrator | ok: [testbed-node-2]
2025-05-25 03:38:34.440249 | orchestrator |
2025-05-25 03:38:34.441438 | orchestrator | TASK [osism.services.chrony : Set chrony_key_file variable to default value] ***
2025-05-25 03:38:34.442800 | orchestrator | Sunday 25 May 2025  03:38:34 +0000 (0:00:00.514)       0:06:49.475 ************
2025-05-25 03:38:34.571448 | orchestrator | ok: [testbed-manager]
2025-05-25 03:38:34.632552 | orchestrator | ok: [testbed-node-3]
2025-05-25 03:38:34.705572 | orchestrator | ok: [testbed-node-4]
2025-05-25 03:38:34.774378 | orchestrator | ok: [testbed-node-5]
2025-05-25 03:38:34.836153 | orchestrator | ok: [testbed-node-0]
2025-05-25 03:38:34.942232 | orchestrator | ok: [testbed-node-1]
2025-05-25 03:38:34.942956 | orchestrator | ok: [testbed-node-2]
2025-05-25 03:38:34.944101 | orchestrator |
2025-05-25 03:38:34.944935 | orchestrator | TASK [osism.services.chrony : Populate service facts] **************************
2025-05-25 03:38:34.945731 | orchestrator | Sunday 25 May 2025  03:38:34 +0000 (0:00:00.504)       0:06:49.980 ************
2025-05-25 03:38:40.509357 | orchestrator | ok: [testbed-manager]
2025-05-25 03:38:40.510899 | orchestrator | ok: [testbed-node-3]
2025-05-25 03:38:40.511946 | orchestrator | ok: [testbed-node-4]
2025-05-25 03:38:40.512491 | orchestrator | ok: [testbed-node-0]
2025-05-25 03:38:40.513254 | orchestrator | ok: [testbed-node-2]
2025-05-25 03:38:40.513861 | orchestrator | ok: [testbed-node-5]
2025-05-25 03:38:40.514818 | orchestrator | ok: [testbed-node-1]
2025-05-25 03:38:40.515711 | orchestrator |
2025-05-25 03:38:40.516497 | orchestrator | TASK [osism.services.chrony : Manage timesyncd service] ************************
2025-05-25 03:38:40.517642 | orchestrator | Sunday 25 May 2025  03:38:40 +0000 (0:00:05.568)       0:06:55.549 ************
2025-05-25 03:38:40.643259 | orchestrator | skipping: [testbed-manager]
2025-05-25 03:38:40.714458 | orchestrator | skipping: [testbed-node-3]
2025-05-25 03:38:40.774232 | orchestrator | skipping: [testbed-node-4]
2025-05-25 03:38:40.837423 | orchestrator | skipping: [testbed-node-5]
2025-05-25 03:38:41.074582 | orchestrator | skipping: [testbed-node-0]
2025-05-25 03:38:41.201789 | orchestrator | skipping: [testbed-node-1]
2025-05-25 03:38:41.202317 | orchestrator | skipping: [testbed-node-2]
2025-05-25 03:38:41.203117 | orchestrator |
2025-05-25 03:38:41.203764 | orchestrator | TASK [osism.services.chrony : Include distribution specific install tasks] *****
2025-05-25 03:38:41.204359 | orchestrator | Sunday 25 May 2025  03:38:41 +0000 (0:00:00.690)       0:06:56.239 ************
2025-05-25 03:38:41.976238 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/tasks/install-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2025-05-25 03:38:41.976515 | orchestrator |
2025-05-25 03:38:41.978363 | orchestrator | TASK [osism.services.chrony : Install package] *********************************
2025-05-25 03:38:41.978947 | orchestrator | Sunday 25 May 2025  03:38:41 +0000 (0:00:00.772)       0:06:57.012 ************
2025-05-25 03:38:43.612416 | orchestrator | ok: [testbed-manager]
2025-05-25 03:38:43.612519 | orchestrator | ok: [testbed-node-3]
2025-05-25 03:38:43.612612 | orchestrator | ok: [testbed-node-4]
2025-05-25 03:38:43.613271 | orchestrator | ok: [testbed-node-5]
2025-05-25 03:38:43.613333 | orchestrator | ok: [testbed-node-0]
2025-05-25 03:38:43.613944 | orchestrator | ok: [testbed-node-2]
2025-05-25 03:38:43.614012 | orchestrator | ok: [testbed-node-1]
2025-05-25 03:38:43.614375 | orchestrator |
2025-05-25 03:38:43.614849 | orchestrator | TASK [osism.services.chrony : Manage chrony service] ***************************
2025-05-25 03:38:43.615192 | orchestrator | Sunday 25 May 2025  03:38:43 +0000 (0:00:01.636)       0:06:58.648 ************
2025-05-25 03:38:44.725574 | orchestrator | ok: [testbed-manager]
2025-05-25 03:38:44.726482 | orchestrator | ok: [testbed-node-3]
2025-05-25 03:38:44.727023 | orchestrator | ok: [testbed-node-4]
2025-05-25 03:38:44.729275 | orchestrator | ok: [testbed-node-5]
2025-05-25 03:38:44.730448 | orchestrator | ok: [testbed-node-0]
2025-05-25 03:38:44.731053 | orchestrator | ok: [testbed-node-1]
2025-05-25 03:38:44.732070 | orchestrator | ok: [testbed-node-2]
2025-05-25 03:38:44.733006 | orchestrator |
2025-05-25 03:38:44.734162 | orchestrator | TASK [osism.services.chrony : Check if configuration file exists] **************
2025-05-25 03:38:44.736894 | orchestrator | Sunday 25 May 2025  03:38:44 +0000 (0:00:01.111)       0:06:59.760 ************
2025-05-25 03:38:45.280960 | orchestrator | ok: [testbed-manager]
2025-05-25 03:38:45.351359 | orchestrator | ok: [testbed-node-3]
2025-05-25 03:38:45.762800 | orchestrator | ok: [testbed-node-4]
2025-05-25 03:38:45.763883 | orchestrator | ok: [testbed-node-5]
2025-05-25 03:38:45.764573 | orchestrator | ok: [testbed-node-0]
2025-05-25 03:38:45.766932 | orchestrator | ok: [testbed-node-1]
2025-05-25 03:38:45.767850 | orchestrator | ok: [testbed-node-2]
2025-05-25 03:38:45.768593 | orchestrator |
2025-05-25 03:38:45.769505 | orchestrator | TASK [osism.services.chrony : Copy configuration file] *************************
2025-05-25 03:38:45.770122 | orchestrator | Sunday 25 May 2025  03:38:45 +0000 (0:00:01.039)       0:07:00.800 ************
2025-05-25 03:38:47.383854 | orchestrator | changed: [testbed-manager] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2)
2025-05-25 03:38:47.385130 | orchestrator | changed: [testbed-node-3] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2)
2025-05-25 03:38:47.386701 | orchestrator | changed: [testbed-node-4] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2)
2025-05-25 03:38:47.388261 | orchestrator | changed: [testbed-node-5] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2)
2025-05-25 03:38:47.388962 | orchestrator | changed: [testbed-node-0] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2)
2025-05-25 03:38:47.390334 | orchestrator | changed: [testbed-node-1] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2)
2025-05-25 03:38:47.391046 | orchestrator | changed: [testbed-node-2] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2)
2025-05-25 03:38:47.391495 | orchestrator |
2025-05-25 03:38:47.392575 | orchestrator | TASK [osism.services.lldpd : Include distribution specific install tasks] ******
2025-05-25 03:38:47.393257 | orchestrator | Sunday 25 May 2025  03:38:47 +0000 (0:00:01.619)       0:07:02.420 ************
2025-05-25 03:38:48.173845 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/lldpd/tasks/install-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2025-05-25 03:38:48.174569 | orchestrator |
2025-05-25 03:38:48.175275 | orchestrator | TASK [osism.services.lldpd : Install lldpd package] ****************************
2025-05-25 03:38:48.176642 | orchestrator | Sunday 25 May 2025  03:38:48 +0000 (0:00:00.790)       0:07:03.211 ************
2025-05-25 03:38:56.586412 | orchestrator | changed: [testbed-node-3]
2025-05-25 03:38:56.586533 | orchestrator | changed: [testbed-node-0]
2025-05-25 03:38:56.586550 | orchestrator | changed: [testbed-manager]
2025-05-25 03:38:56.587688 | orchestrator | changed: [testbed-node-4]
2025-05-25 03:38:56.588359 | orchestrator | changed: [testbed-node-2]
2025-05-25 03:38:56.589332 | orchestrator | changed: [testbed-node-5]
2025-05-25 03:38:56.589795 | orchestrator | changed: [testbed-node-1]
2025-05-25 03:38:56.590903 | orchestrator |
2025-05-25 03:38:56.591286 | orchestrator | TASK [osism.services.lldpd : Manage lldpd service] *****************************
2025-05-25 03:38:56.591747 | orchestrator | Sunday 25 May 2025  03:38:56 +0000 (0:00:08.408)       0:07:11.619 ************
2025-05-25 03:38:58.262957 | orchestrator | ok: [testbed-manager]
2025-05-25 03:38:58.263364 | orchestrator | ok: [testbed-node-3]
2025-05-25 03:38:58.266944 | orchestrator | ok: [testbed-node-4]
2025-05-25 03:38:58.266978 | orchestrator | ok: [testbed-node-5]
2025-05-25 03:38:58.266990 | orchestrator | ok: [testbed-node-0]
2025-05-25 03:38:58.267107 | orchestrator | ok: [testbed-node-1]
2025-05-25 03:38:58.268633 | orchestrator | ok: [testbed-node-2]
2025-05-25 03:38:58.269509 | orchestrator |
2025-05-25 03:38:58.270436 | orchestrator | RUNNING HANDLER [osism.commons.docker_compose : Reload systemd daemon] *********
2025-05-25 03:38:58.271503 | orchestrator | Sunday 25 May 2025  03:38:58 +0000 (0:00:01.681)       0:07:13.301 ************
2025-05-25 03:38:59.538206 | orchestrator | ok: [testbed-node-3]
2025-05-25 03:38:59.539224 | orchestrator | ok: [testbed-node-4]
2025-05-25 03:38:59.540382 | orchestrator | ok: [testbed-node-5]
2025-05-25 03:38:59.541229 | orchestrator | ok: [testbed-node-0]
2025-05-25 03:38:59.541789 | orchestrator | ok: [testbed-node-1]
2025-05-25 03:38:59.542386 | orchestrator | ok: [testbed-node-2]
2025-05-25 03:38:59.543132 | orchestrator |
2025-05-25 03:38:59.543679 | orchestrator | RUNNING HANDLER [osism.services.chrony : Restart chrony service] ***************
2025-05-25 03:38:59.544205 | orchestrator | Sunday 25 May 2025  03:38:59 +0000 (0:00:01.275)       0:07:14.576 ************
2025-05-25 03:39:00.944735 | orchestrator | changed: [testbed-node-3]
2025-05-25 03:39:00.945290 | orchestrator | changed: [testbed-manager]
2025-05-25 03:39:00.946572 | orchestrator | changed: [testbed-node-4]
2025-05-25 03:39:00.947987 | orchestrator | changed: [testbed-node-5]
2025-05-25 03:39:00.948968 | orchestrator | changed: [testbed-node-0]
2025-05-25 03:39:00.950352 | orchestrator | changed: [testbed-node-1]
2025-05-25 03:39:00.950908 | orchestrator | changed: [testbed-node-2]
2025-05-25 03:39:00.951719 | orchestrator |
2025-05-25 03:39:00.952655 | orchestrator | PLAY [Apply bootstrap role part 2] *********************************************
2025-05-25 03:39:00.953227 | orchestrator |
2025-05-25 03:39:00.953897 | orchestrator | TASK [Include hardening role] **************************************************
2025-05-25 03:39:00.954407 | orchestrator | Sunday 25 May 2025  03:39:00 +0000 (0:00:01.406)       0:07:15.982 ************
2025-05-25 03:39:01.078537 | orchestrator | skipping: [testbed-manager]
2025-05-25 03:39:01.139191 | orchestrator | skipping: [testbed-node-3]
2025-05-25 03:39:01.198963 | orchestrator | skipping: [testbed-node-4]
2025-05-25 03:39:01.262380 | orchestrator | skipping: [testbed-node-5]
2025-05-25 03:39:01.319366 | orchestrator | skipping: [testbed-node-0]
2025-05-25 03:39:01.436685 | orchestrator | skipping: [testbed-node-1]
2025-05-25 03:39:01.436862 | orchestrator | skipping: [testbed-node-2]
2025-05-25 03:39:01.437601 | orchestrator |
2025-05-25 03:39:01.438655 | orchestrator | PLAY [Apply bootstrap roles part 3] ********************************************
2025-05-25 03:39:01.441865 | orchestrator |
2025-05-25 03:39:01.442663 | orchestrator | TASK [osism.services.journald : Copy configuration file] ***********************
2025-05-25 03:39:01.443571 | orchestrator | Sunday 25 May 2025  03:39:01 +0000 (0:00:00.491)       0:07:16.474 ************
2025-05-25 03:39:02.724284 | orchestrator | changed: [testbed-manager]
2025-05-25 03:39:02.724390 | orchestrator | changed: [testbed-node-3]
2025-05-25 03:39:02.727191 | orchestrator | changed: [testbed-node-4]
2025-05-25 03:39:02.727222 | orchestrator | changed: [testbed-node-5]
2025-05-25 03:39:02.727233 | orchestrator | changed: [testbed-node-0]
2025-05-25 03:39:02.727553 | orchestrator | changed: [testbed-node-1]
2025-05-25 03:39:02.728197 | orchestrator | changed: [testbed-node-2]
2025-05-25 03:39:02.728634 | orchestrator |
2025-05-25 03:39:02.729232 | orchestrator | TASK [osism.services.journald : Manage journald service] ***********************
2025-05-25 03:39:02.729802 | orchestrator | Sunday 25 May 2025  03:39:02 +0000 (0:00:01.286)       0:07:17.761 ************
2025-05-25 03:39:04.290341 | orchestrator | ok: [testbed-manager]
2025-05-25 03:39:04.290446 | orchestrator | ok: [testbed-node-3]
2025-05-25 03:39:04.291980 | orchestrator | ok: [testbed-node-4]
2025-05-25 03:39:04.294546 | orchestrator | ok: [testbed-node-5]
2025-05-25 03:39:04.295667 | orchestrator | ok: [testbed-node-0]
2025-05-25 03:39:04.296919 | orchestrator | ok: [testbed-node-1]
2025-05-25 03:39:04.297652 | orchestrator | ok: [testbed-node-2]
2025-05-25 03:39:04.298301 | orchestrator |
2025-05-25 03:39:04.298852 | orchestrator | TASK [Include auditd role] *****************************************************
2025-05-25 03:39:04.299701 | orchestrator | Sunday 25 May 2025  03:39:04 +0000 (0:00:01.562)       0:07:19.324 ************
2025-05-25 03:39:04.427651 | orchestrator | skipping: [testbed-manager]
2025-05-25 03:39:04.491587 | orchestrator | skipping: [testbed-node-3]
2025-05-25 03:39:04.558338 | orchestrator | skipping: [testbed-node-4]
2025-05-25 03:39:04.616359 | orchestrator | skipping: [testbed-node-5]
2025-05-25 03:39:04.675617 | orchestrator | skipping: [testbed-node-0]
2025-05-25 03:39:05.046635 | orchestrator | skipping: [testbed-node-1]
2025-05-25 03:39:05.046831 | orchestrator | skipping: [testbed-node-2]
2025-05-25 03:39:05.048553 | orchestrator |
2025-05-25 03:39:05.051218 | orchestrator | RUNNING HANDLER [osism.services.journald : Restart journald service] ***********
2025-05-25 03:39:05.051247 | orchestrator | Sunday 25 May 2025  03:39:05 +0000 (0:00:00.761)       0:07:20.085 ************
2025-05-25 03:39:06.255022 | orchestrator | changed: [testbed-manager]
2025-05-25 03:39:06.256510 | orchestrator | changed: [testbed-node-3]
2025-05-25 03:39:06.257538 | orchestrator | changed: [testbed-node-4]
2025-05-25 03:39:06.261774 | orchestrator | changed: [testbed-node-5]
2025-05-25 03:39:06.262522 | orchestrator | changed: [testbed-node-0]
2025-05-25 03:39:06.263126 | orchestrator | changed: [testbed-node-1]
2025-05-25 03:39:06.265443 | orchestrator | changed: [testbed-node-2]
2025-05-25 03:39:06.266166 | orchestrator |
2025-05-25 03:39:06.266563 | orchestrator | PLAY [Set state bootstrap] *****************************************************
2025-05-25 03:39:06.267137 | orchestrator |
2025-05-25 03:39:06.268218 | orchestrator | TASK [Set osism.bootstrap.status fact] *****************************************
2025-05-25 03:39:06.268440 | orchestrator | Sunday 25 May 2025  03:39:06 +0000 (0:00:01.207)       0:07:21.292 ************
2025-05-25 03:39:07.190122 | orchestrator | included: osism.commons.state for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2025-05-25 03:39:07.193688 | orchestrator |
2025-05-25 03:39:07.193725 | orchestrator | TASK [osism.commons.state : Create custom facts directory] *********************
2025-05-25 03:39:07.193740 | orchestrator | Sunday 25 May 2025  03:39:07 +0000 (0:00:00.933)       0:07:22.225 ************
2025-05-25 03:39:07.593487 | orchestrator | ok: [testbed-manager]
2025-05-25 03:39:08.013301 | orchestrator | ok: [testbed-node-3]
2025-05-25 03:39:08.014150 | orchestrator | ok: [testbed-node-4]
2025-05-25 03:39:08.015244 | orchestrator | ok: [testbed-node-5]
2025-05-25 03:39:08.016495 | orchestrator | ok: [testbed-node-0]
2025-05-25 03:39:08.017360 | orchestrator | ok: [testbed-node-1]
2025-05-25 03:39:08.017599 | orchestrator | ok: [testbed-node-2]
2025-05-25 03:39:08.018521 | orchestrator |
2025-05-25 03:39:08.019110 | orchestrator | TASK [osism.commons.state : Write state into file] *****************************
2025-05-25 03:39:08.019527 | orchestrator | Sunday 25 May 2025 03:39:08 +0000 (0:00:00.824) 0:07:23.050 ************ 2025-05-25 03:39:09.127361 | orchestrator | changed: [testbed-manager] 2025-05-25 03:39:09.130742 | orchestrator | changed: [testbed-node-4] 2025-05-25 03:39:09.131276 | orchestrator | changed: [testbed-node-3] 2025-05-25 03:39:09.132263 | orchestrator | changed: [testbed-node-5] 2025-05-25 03:39:09.132967 | orchestrator | changed: [testbed-node-0] 2025-05-25 03:39:09.135902 | orchestrator | changed: [testbed-node-1] 2025-05-25 03:39:09.136437 | orchestrator | changed: [testbed-node-2] 2025-05-25 03:39:09.136963 | orchestrator | 2025-05-25 03:39:09.137551 | orchestrator | TASK [Set osism.bootstrap.timestamp fact] ************************************** 2025-05-25 03:39:09.138162 | orchestrator | Sunday 25 May 2025 03:39:09 +0000 (0:00:01.112) 0:07:24.163 ************ 2025-05-25 03:39:10.108269 | orchestrator | included: osism.commons.state for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-05-25 03:39:10.109487 | orchestrator | 2025-05-25 03:39:10.112936 | orchestrator | TASK [osism.commons.state : Create custom facts directory] ********************* 2025-05-25 03:39:10.114000 | orchestrator | Sunday 25 May 2025 03:39:10 +0000 (0:00:00.981) 0:07:25.144 ************ 2025-05-25 03:39:10.933920 | orchestrator | ok: [testbed-manager] 2025-05-25 03:39:10.934430 | orchestrator | ok: [testbed-node-3] 2025-05-25 03:39:10.935370 | orchestrator | ok: [testbed-node-4] 2025-05-25 03:39:10.936609 | orchestrator | ok: [testbed-node-5] 2025-05-25 03:39:10.937785 | orchestrator | ok: [testbed-node-0] 2025-05-25 03:39:10.939228 | orchestrator | ok: [testbed-node-1] 2025-05-25 03:39:10.939399 | orchestrator | ok: [testbed-node-2] 2025-05-25 03:39:10.940071 | orchestrator | 2025-05-25 03:39:10.941156 | orchestrator | TASK [osism.commons.state : Write state into file] ***************************** 
2025-05-25 03:39:10.941640 | orchestrator | Sunday 25 May 2025 03:39:10 +0000 (0:00:00.823) 0:07:25.968 ************ 2025-05-25 03:39:12.018459 | orchestrator | changed: [testbed-manager] 2025-05-25 03:39:12.018587 | orchestrator | changed: [testbed-node-3] 2025-05-25 03:39:12.018710 | orchestrator | changed: [testbed-node-4] 2025-05-25 03:39:12.019370 | orchestrator | changed: [testbed-node-5] 2025-05-25 03:39:12.019898 | orchestrator | changed: [testbed-node-0] 2025-05-25 03:39:12.021703 | orchestrator | changed: [testbed-node-1] 2025-05-25 03:39:12.021967 | orchestrator | changed: [testbed-node-2] 2025-05-25 03:39:12.022623 | orchestrator | 2025-05-25 03:39:12.023147 | orchestrator | PLAY RECAP ********************************************************************* 2025-05-25 03:39:12.023462 | orchestrator | 2025-05-25 03:39:12 | INFO  | Play has been completed. There may now be a delay until all logs have been written. 2025-05-25 03:39:12.024229 | orchestrator | 2025-05-25 03:39:12 | INFO  | Please wait and do not abort execution. 
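[Editor's note: the "Set osism.bootstrap.status fact" and "Set osism.bootstrap.timestamp fact" plays above use Ansible's local (custom) facts mechanism: the osism.commons.state role creates a facts directory and writes the state into a file there. The exact file name and key layout used by OSISM are not visible in this log; the following is a hypothetical sketch only, based on how Ansible local facts generally work (INI files under /etc/ansible/facts.d/ are exposed via the ansible_local variable):]

```ini
; hypothetical illustration -- file name and keys are assumptions,
; not taken from this log
; /etc/ansible/facts.d/osism.fact
[bootstrap]
status = True
timestamp = 2025-05-25T03:39:09+00:00
```

With a layout like this, later plays could read the value as `ansible_local.osism.bootstrap.status` after gathering facts.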
2025-05-25 03:39:12.024760 | orchestrator | testbed-manager : ok=162  changed=38  unreachable=0 failed=0 skipped=41  rescued=0 ignored=0
2025-05-25 03:39:12.025877 | orchestrator | testbed-node-0 : ok=170  changed=66  unreachable=0 failed=0 skipped=36  rescued=0 ignored=0
2025-05-25 03:39:12.026214 | orchestrator | testbed-node-1 : ok=170  changed=66  unreachable=0 failed=0 skipped=36  rescued=0 ignored=0
2025-05-25 03:39:12.026338 | orchestrator | testbed-node-2 : ok=170  changed=66  unreachable=0 failed=0 skipped=36  rescued=0 ignored=0
2025-05-25 03:39:12.027244 | orchestrator | testbed-node-3 : ok=169  changed=63  unreachable=0 failed=0 skipped=37  rescued=0 ignored=0
2025-05-25 03:39:12.027954 | orchestrator | testbed-node-4 : ok=169  changed=63  unreachable=0 failed=0 skipped=36  rescued=0 ignored=0
2025-05-25 03:39:12.028440 | orchestrator | testbed-node-5 : ok=169  changed=63  unreachable=0 failed=0 skipped=36  rescued=0 ignored=0
2025-05-25 03:39:12.029071 | orchestrator |
2025-05-25 03:39:12.029822 | orchestrator |
2025-05-25 03:39:12.030399 | orchestrator | TASKS RECAP ********************************************************************
2025-05-25 03:39:12.031267 | orchestrator | Sunday 25 May 2025 03:39:12 +0000 (0:00:01.085) 0:07:27.054 ************
2025-05-25 03:39:12.031676 | orchestrator | ===============================================================================
2025-05-25 03:39:12.032242 | orchestrator | osism.commons.packages : Install required packages --------------------- 73.42s
2025-05-25 03:39:12.032734 | orchestrator | osism.commons.packages : Download required packages -------------------- 37.38s
2025-05-25 03:39:12.033292 | orchestrator | osism.commons.cleanup : Cleanup installed packages --------------------- 32.58s
2025-05-25 03:39:12.033692 | orchestrator | osism.commons.repository : Update package cache ------------------------ 13.74s
2025-05-25 03:39:12.034323 | orchestrator | osism.commons.packages : Remove dependencies that are no longer required -- 11.45s
2025-05-25 03:39:12.034716 | orchestrator | osism.commons.systohc : Install util-linux-extra package --------------- 11.13s
2025-05-25 03:39:12.035445 | orchestrator | osism.services.docker : Install docker package -------------------------- 9.86s
2025-05-25 03:39:12.035982 | orchestrator | osism.services.docker : Install containerd package ---------------------- 8.93s
2025-05-25 03:39:12.036420 | orchestrator | osism.services.lldpd : Install lldpd package ---------------------------- 8.41s
2025-05-25 03:39:12.037143 | orchestrator | osism.services.docker : Install docker-cli package ---------------------- 7.99s
2025-05-25 03:39:12.037546 | orchestrator | osism.services.smartd : Install smartmontools package ------------------- 7.81s
2025-05-25 03:39:12.038607 | orchestrator | osism.commons.cleanup : Remove cloudinit package ------------------------ 7.21s
2025-05-25 03:39:12.038792 | orchestrator | osism.services.rng : Install rng package -------------------------------- 7.11s
2025-05-25 03:39:12.040115 | orchestrator | osism.commons.cleanup : Uninstall unattended-upgrades package ----------- 7.09s
2025-05-25 03:39:12.040682 | orchestrator | osism.commons.docker_compose : Install docker-compose-plugin package ---- 6.97s
2025-05-25 03:39:12.041191 | orchestrator | osism.services.docker : Add repository ---------------------------------- 6.96s
2025-05-25 03:39:12.041708 | orchestrator | osism.services.docker : Install apt-transport-https package ------------- 5.97s
2025-05-25 03:39:12.042471 | orchestrator | osism.services.chrony : Populate service facts -------------------------- 5.57s
2025-05-25 03:39:12.042801 | orchestrator | osism.commons.cleanup : Populate service facts -------------------------- 5.57s
2025-05-25 03:39:12.043251 | orchestrator | osism.commons.services : Populate service facts ------------------------- 5.52s
2025-05-25 03:39:12.720735 | orchestrator | + [[ -e /etc/redhat-release ]]
2025-05-25 03:39:12.720837 |
orchestrator | + osism apply network 2025-05-25 03:39:14.959965 | orchestrator | 2025-05-25 03:39:14 | INFO  | Task 200c7d36-3856-49a5-892f-b39b44db091a (network) was prepared for execution. 2025-05-25 03:39:14.960166 | orchestrator | 2025-05-25 03:39:14 | INFO  | It takes a moment until task 200c7d36-3856-49a5-892f-b39b44db091a (network) has been started and output is visible here. 2025-05-25 03:39:19.200214 | orchestrator | 2025-05-25 03:39:19.204211 | orchestrator | PLAY [Apply role network] ****************************************************** 2025-05-25 03:39:19.204281 | orchestrator | 2025-05-25 03:39:19.204296 | orchestrator | TASK [osism.commons.network : Gather variables for each operating system] ****** 2025-05-25 03:39:19.204355 | orchestrator | Sunday 25 May 2025 03:39:19 +0000 (0:00:00.286) 0:00:00.286 ************ 2025-05-25 03:39:19.356227 | orchestrator | ok: [testbed-manager] 2025-05-25 03:39:19.439771 | orchestrator | ok: [testbed-node-0] 2025-05-25 03:39:19.527128 | orchestrator | ok: [testbed-node-1] 2025-05-25 03:39:19.612644 | orchestrator | ok: [testbed-node-2] 2025-05-25 03:39:19.789012 | orchestrator | ok: [testbed-node-3] 2025-05-25 03:39:19.924138 | orchestrator | ok: [testbed-node-4] 2025-05-25 03:39:19.926634 | orchestrator | ok: [testbed-node-5] 2025-05-25 03:39:19.927614 | orchestrator | 2025-05-25 03:39:19.928652 | orchestrator | TASK [osism.commons.network : Include type specific tasks] ********************* 2025-05-25 03:39:19.929425 | orchestrator | Sunday 25 May 2025 03:39:19 +0000 (0:00:00.721) 0:00:01.008 ************ 2025-05-25 03:39:21.117894 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/network/tasks/netplan-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-05-25 03:39:21.118549 | orchestrator | 2025-05-25 03:39:21.119475 | orchestrator | TASK [osism.commons.network : Install required 
packages] *********************** 2025-05-25 03:39:21.120642 | orchestrator | Sunday 25 May 2025 03:39:21 +0000 (0:00:01.193) 0:00:02.201 ************ 2025-05-25 03:39:22.980563 | orchestrator | ok: [testbed-manager] 2025-05-25 03:39:22.981574 | orchestrator | ok: [testbed-node-0] 2025-05-25 03:39:22.982745 | orchestrator | ok: [testbed-node-2] 2025-05-25 03:39:22.985966 | orchestrator | ok: [testbed-node-1] 2025-05-25 03:39:22.987451 | orchestrator | ok: [testbed-node-3] 2025-05-25 03:39:22.987571 | orchestrator | ok: [testbed-node-4] 2025-05-25 03:39:22.989741 | orchestrator | ok: [testbed-node-5] 2025-05-25 03:39:22.990332 | orchestrator | 2025-05-25 03:39:22.991651 | orchestrator | TASK [osism.commons.network : Remove ifupdown package] ************************* 2025-05-25 03:39:22.994818 | orchestrator | Sunday 25 May 2025 03:39:22 +0000 (0:00:01.866) 0:00:04.067 ************ 2025-05-25 03:39:24.704595 | orchestrator | ok: [testbed-manager] 2025-05-25 03:39:24.704705 | orchestrator | ok: [testbed-node-0] 2025-05-25 03:39:24.704719 | orchestrator | ok: [testbed-node-1] 2025-05-25 03:39:24.705533 | orchestrator | ok: [testbed-node-2] 2025-05-25 03:39:24.708918 | orchestrator | ok: [testbed-node-3] 2025-05-25 03:39:24.710115 | orchestrator | ok: [testbed-node-4] 2025-05-25 03:39:24.710865 | orchestrator | ok: [testbed-node-5] 2025-05-25 03:39:24.711570 | orchestrator | 2025-05-25 03:39:24.712382 | orchestrator | TASK [osism.commons.network : Create required directories] ********************* 2025-05-25 03:39:24.713371 | orchestrator | Sunday 25 May 2025 03:39:24 +0000 (0:00:01.720) 0:00:05.788 ************ 2025-05-25 03:39:25.216328 | orchestrator | ok: [testbed-manager] => (item=/etc/netplan) 2025-05-25 03:39:25.216438 | orchestrator | ok: [testbed-node-0] => (item=/etc/netplan) 2025-05-25 03:39:25.682550 | orchestrator | ok: [testbed-node-1] => (item=/etc/netplan) 2025-05-25 03:39:25.683468 | orchestrator | ok: [testbed-node-2] => (item=/etc/netplan) 2025-05-25 
03:39:25.684628 | orchestrator | ok: [testbed-node-3] => (item=/etc/netplan) 2025-05-25 03:39:25.685275 | orchestrator | ok: [testbed-node-4] => (item=/etc/netplan) 2025-05-25 03:39:25.685950 | orchestrator | ok: [testbed-node-5] => (item=/etc/netplan) 2025-05-25 03:39:25.687002 | orchestrator | 2025-05-25 03:39:25.687497 | orchestrator | TASK [osism.commons.network : Prepare netplan configuration template] ********** 2025-05-25 03:39:25.688155 | orchestrator | Sunday 25 May 2025 03:39:25 +0000 (0:00:00.982) 0:00:06.771 ************ 2025-05-25 03:39:29.177321 | orchestrator | ok: [testbed-node-0 -> localhost] 2025-05-25 03:39:29.180836 | orchestrator | ok: [testbed-manager -> localhost] 2025-05-25 03:39:29.180880 | orchestrator | ok: [testbed-node-3 -> localhost] 2025-05-25 03:39:29.180920 | orchestrator | ok: [testbed-node-1 -> localhost] 2025-05-25 03:39:29.182323 | orchestrator | ok: [testbed-node-5 -> localhost] 2025-05-25 03:39:29.182722 | orchestrator | ok: [testbed-node-2 -> localhost] 2025-05-25 03:39:29.183341 | orchestrator | ok: [testbed-node-4 -> localhost] 2025-05-25 03:39:29.183592 | orchestrator | 2025-05-25 03:39:29.184174 | orchestrator | TASK [osism.commons.network : Copy netplan configuration] ********************** 2025-05-25 03:39:29.184684 | orchestrator | Sunday 25 May 2025 03:39:29 +0000 (0:00:03.493) 0:00:10.264 ************ 2025-05-25 03:39:30.777198 | orchestrator | changed: [testbed-manager] 2025-05-25 03:39:30.778203 | orchestrator | changed: [testbed-node-0] 2025-05-25 03:39:30.781610 | orchestrator | changed: [testbed-node-1] 2025-05-25 03:39:30.781634 | orchestrator | changed: [testbed-node-2] 2025-05-25 03:39:30.781646 | orchestrator | changed: [testbed-node-3] 2025-05-25 03:39:30.781657 | orchestrator | changed: [testbed-node-4] 2025-05-25 03:39:30.782247 | orchestrator | changed: [testbed-node-5] 2025-05-25 03:39:30.783213 | orchestrator | 2025-05-25 03:39:30.783835 | orchestrator | TASK [osism.commons.network : Remove netplan 
configuration template] *********** 2025-05-25 03:39:30.784550 | orchestrator | Sunday 25 May 2025 03:39:30 +0000 (0:00:01.597) 0:00:11.862 ************ 2025-05-25 03:39:32.506857 | orchestrator | ok: [testbed-node-1 -> localhost] 2025-05-25 03:39:32.507333 | orchestrator | ok: [testbed-manager -> localhost] 2025-05-25 03:39:32.508804 | orchestrator | ok: [testbed-node-0 -> localhost] 2025-05-25 03:39:32.509436 | orchestrator | ok: [testbed-node-4 -> localhost] 2025-05-25 03:39:32.509804 | orchestrator | ok: [testbed-node-2 -> localhost] 2025-05-25 03:39:32.510230 | orchestrator | ok: [testbed-node-3 -> localhost] 2025-05-25 03:39:32.512892 | orchestrator | ok: [testbed-node-5 -> localhost] 2025-05-25 03:39:32.513668 | orchestrator | 2025-05-25 03:39:32.514879 | orchestrator | TASK [osism.commons.network : Check if path for interface file exists] ********* 2025-05-25 03:39:32.515942 | orchestrator | Sunday 25 May 2025 03:39:32 +0000 (0:00:01.731) 0:00:13.593 ************ 2025-05-25 03:39:32.944468 | orchestrator | ok: [testbed-manager] 2025-05-25 03:39:33.229994 | orchestrator | ok: [testbed-node-0] 2025-05-25 03:39:33.643046 | orchestrator | ok: [testbed-node-1] 2025-05-25 03:39:33.645608 | orchestrator | ok: [testbed-node-2] 2025-05-25 03:39:33.645640 | orchestrator | ok: [testbed-node-3] 2025-05-25 03:39:33.646267 | orchestrator | ok: [testbed-node-4] 2025-05-25 03:39:33.649882 | orchestrator | ok: [testbed-node-5] 2025-05-25 03:39:33.651021 | orchestrator | 2025-05-25 03:39:33.653431 | orchestrator | TASK [osism.commons.network : Copy interfaces file] **************************** 2025-05-25 03:39:33.655329 | orchestrator | Sunday 25 May 2025 03:39:33 +0000 (0:00:01.131) 0:00:14.725 ************ 2025-05-25 03:39:33.804213 | orchestrator | skipping: [testbed-manager] 2025-05-25 03:39:33.885451 | orchestrator | skipping: [testbed-node-0] 2025-05-25 03:39:33.971524 | orchestrator | skipping: [testbed-node-1] 2025-05-25 03:39:34.054353 | orchestrator | skipping: 
[testbed-node-2] 2025-05-25 03:39:34.137195 | orchestrator | skipping: [testbed-node-3] 2025-05-25 03:39:34.274608 | orchestrator | skipping: [testbed-node-4] 2025-05-25 03:39:34.275846 | orchestrator | skipping: [testbed-node-5] 2025-05-25 03:39:34.276411 | orchestrator | 2025-05-25 03:39:34.279586 | orchestrator | TASK [osism.commons.network : Install package networkd-dispatcher] ************* 2025-05-25 03:39:34.279626 | orchestrator | Sunday 25 May 2025 03:39:34 +0000 (0:00:00.634) 0:00:15.359 ************ 2025-05-25 03:39:36.380353 | orchestrator | ok: [testbed-manager] 2025-05-25 03:39:36.381778 | orchestrator | ok: [testbed-node-0] 2025-05-25 03:39:36.381820 | orchestrator | ok: [testbed-node-2] 2025-05-25 03:39:36.382918 | orchestrator | ok: [testbed-node-1] 2025-05-25 03:39:36.382941 | orchestrator | ok: [testbed-node-3] 2025-05-25 03:39:36.386297 | orchestrator | ok: [testbed-node-4] 2025-05-25 03:39:36.386338 | orchestrator | ok: [testbed-node-5] 2025-05-25 03:39:36.386602 | orchestrator | 2025-05-25 03:39:36.387675 | orchestrator | TASK [osism.commons.network : Copy dispatcher scripts] ************************* 2025-05-25 03:39:36.388752 | orchestrator | Sunday 25 May 2025 03:39:36 +0000 (0:00:02.106) 0:00:17.466 ************ 2025-05-25 03:39:36.631960 | orchestrator | skipping: [testbed-node-0] 2025-05-25 03:39:36.714586 | orchestrator | skipping: [testbed-node-1] 2025-05-25 03:39:36.795519 | orchestrator | skipping: [testbed-node-2] 2025-05-25 03:39:36.881641 | orchestrator | skipping: [testbed-node-3] 2025-05-25 03:39:37.196355 | orchestrator | skipping: [testbed-node-4] 2025-05-25 03:39:37.196614 | orchestrator | skipping: [testbed-node-5] 2025-05-25 03:39:37.197264 | orchestrator | changed: [testbed-manager] => (item={'dest': 'routable.d/iptables.sh', 'src': '/opt/configuration/network/iptables.sh'}) 2025-05-25 03:39:37.197656 | orchestrator | 2025-05-25 03:39:37.198587 | orchestrator | TASK [osism.commons.network : Manage service 
networkd-dispatcher] ************** 2025-05-25 03:39:37.199030 | orchestrator | Sunday 25 May 2025 03:39:37 +0000 (0:00:00.819) 0:00:18.286 ************ 2025-05-25 03:39:38.868866 | orchestrator | ok: [testbed-manager] 2025-05-25 03:39:38.869044 | orchestrator | changed: [testbed-node-0] 2025-05-25 03:39:38.869826 | orchestrator | changed: [testbed-node-2] 2025-05-25 03:39:38.871296 | orchestrator | changed: [testbed-node-1] 2025-05-25 03:39:38.873313 | orchestrator | changed: [testbed-node-3] 2025-05-25 03:39:38.874117 | orchestrator | changed: [testbed-node-4] 2025-05-25 03:39:38.874846 | orchestrator | changed: [testbed-node-5] 2025-05-25 03:39:38.875654 | orchestrator | 2025-05-25 03:39:38.876536 | orchestrator | TASK [osism.commons.network : Include cleanup tasks] *************************** 2025-05-25 03:39:38.877713 | orchestrator | Sunday 25 May 2025 03:39:38 +0000 (0:00:01.666) 0:00:19.952 ************ 2025-05-25 03:39:40.062357 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/network/tasks/cleanup-netplan.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-05-25 03:39:40.062468 | orchestrator | 2025-05-25 03:39:40.062549 | orchestrator | TASK [osism.commons.network : List existing configuration files] *************** 2025-05-25 03:39:40.064365 | orchestrator | Sunday 25 May 2025 03:39:40 +0000 (0:00:01.192) 0:00:21.145 ************ 2025-05-25 03:39:40.769116 | orchestrator | ok: [testbed-manager] 2025-05-25 03:39:41.667605 | orchestrator | ok: [testbed-node-0] 2025-05-25 03:39:41.669158 | orchestrator | ok: [testbed-node-2] 2025-05-25 03:39:41.669488 | orchestrator | ok: [testbed-node-3] 2025-05-25 03:39:41.670601 | orchestrator | ok: [testbed-node-4] 2025-05-25 03:39:41.673303 | orchestrator | ok: [testbed-node-5] 2025-05-25 03:39:41.673527 | orchestrator | ok: [testbed-node-1] 2025-05-25 03:39:41.674184 | orchestrator | 2025-05-25 
03:39:41.674516 | orchestrator | TASK [osism.commons.network : Set network_configured_files fact] *************** 2025-05-25 03:39:41.674807 | orchestrator | Sunday 25 May 2025 03:39:41 +0000 (0:00:01.602) 0:00:22.747 ************ 2025-05-25 03:39:41.826793 | orchestrator | ok: [testbed-manager] 2025-05-25 03:39:41.904616 | orchestrator | ok: [testbed-node-0] 2025-05-25 03:39:41.987030 | orchestrator | ok: [testbed-node-1] 2025-05-25 03:39:42.065282 | orchestrator | ok: [testbed-node-2] 2025-05-25 03:39:42.144814 | orchestrator | ok: [testbed-node-3] 2025-05-25 03:39:42.287325 | orchestrator | ok: [testbed-node-4] 2025-05-25 03:39:42.288807 | orchestrator | ok: [testbed-node-5] 2025-05-25 03:39:42.290532 | orchestrator | 2025-05-25 03:39:42.291072 | orchestrator | TASK [osism.commons.network : Remove unused configuration files] *************** 2025-05-25 03:39:42.292544 | orchestrator | Sunday 25 May 2025 03:39:42 +0000 (0:00:00.626) 0:00:23.374 ************ 2025-05-25 03:39:42.623012 | orchestrator | changed: [testbed-manager] => (item=/etc/netplan/50-cloud-init.yaml) 2025-05-25 03:39:42.623235 | orchestrator | skipping: [testbed-manager] => (item=/etc/netplan/01-osism.yaml)  2025-05-25 03:39:42.924566 | orchestrator | changed: [testbed-node-0] => (item=/etc/netplan/50-cloud-init.yaml) 2025-05-25 03:39:42.925440 | orchestrator | skipping: [testbed-node-0] => (item=/etc/netplan/01-osism.yaml)  2025-05-25 03:39:42.927625 | orchestrator | changed: [testbed-node-1] => (item=/etc/netplan/50-cloud-init.yaml) 2025-05-25 03:39:42.928421 | orchestrator | skipping: [testbed-node-1] => (item=/etc/netplan/01-osism.yaml)  2025-05-25 03:39:42.930832 | orchestrator | changed: [testbed-node-2] => (item=/etc/netplan/50-cloud-init.yaml) 2025-05-25 03:39:42.931947 | orchestrator | skipping: [testbed-node-2] => (item=/etc/netplan/01-osism.yaml)  2025-05-25 03:39:43.020898 | orchestrator | changed: [testbed-node-3] => (item=/etc/netplan/50-cloud-init.yaml) 2025-05-25 03:39:43.021908 | 
orchestrator | skipping: [testbed-node-3] => (item=/etc/netplan/01-osism.yaml)  2025-05-25 03:39:43.525578 | orchestrator | changed: [testbed-node-4] => (item=/etc/netplan/50-cloud-init.yaml) 2025-05-25 03:39:43.525684 | orchestrator | skipping: [testbed-node-4] => (item=/etc/netplan/01-osism.yaml)  2025-05-25 03:39:43.526480 | orchestrator | changed: [testbed-node-5] => (item=/etc/netplan/50-cloud-init.yaml) 2025-05-25 03:39:43.526886 | orchestrator | skipping: [testbed-node-5] => (item=/etc/netplan/01-osism.yaml)  2025-05-25 03:39:43.527367 | orchestrator | 2025-05-25 03:39:43.528968 | orchestrator | TASK [osism.commons.network : Include dummy interfaces] ************************ 2025-05-25 03:39:43.529675 | orchestrator | Sunday 25 May 2025 03:39:43 +0000 (0:00:01.235) 0:00:24.609 ************ 2025-05-25 03:39:43.689046 | orchestrator | skipping: [testbed-manager] 2025-05-25 03:39:43.769802 | orchestrator | skipping: [testbed-node-0] 2025-05-25 03:39:43.848937 | orchestrator | skipping: [testbed-node-1] 2025-05-25 03:39:43.925789 | orchestrator | skipping: [testbed-node-2] 2025-05-25 03:39:44.002405 | orchestrator | skipping: [testbed-node-3] 2025-05-25 03:39:44.125836 | orchestrator | skipping: [testbed-node-4] 2025-05-25 03:39:44.127760 | orchestrator | skipping: [testbed-node-5] 2025-05-25 03:39:44.131291 | orchestrator | 2025-05-25 03:39:44.132404 | orchestrator | TASK [osism.commons.network : Include vxlan interfaces] ************************ 2025-05-25 03:39:44.134249 | orchestrator | Sunday 25 May 2025 03:39:44 +0000 (0:00:00.604) 0:00:25.214 ************ 2025-05-25 03:39:47.573807 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/network/tasks/vxlan-interfaces.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-3, testbed-node-2, testbed-node-4, testbed-node-5 2025-05-25 03:39:47.576503 | orchestrator | 2025-05-25 03:39:47.576543 | orchestrator | TASK [osism.commons.network : Create 
systemd networkd netdev files] ************ 2025-05-25 03:39:47.576928 | orchestrator | Sunday 25 May 2025 03:39:47 +0000 (0:00:03.441) 0:00:28.656 ************ 2025-05-25 03:39:52.736000 | orchestrator | changed: [testbed-manager] => (item={'key': 'vxlan0', 'value': {'addresses': ['192.168.112.5/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15'], 'local_ip': '192.168.16.5', 'mtu': 1350, 'vni': 42}}) 2025-05-25 03:39:52.736291 | orchestrator | changed: [testbed-manager] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.5/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15'], 'local_ip': '192.168.16.5', 'mtu': 1350, 'vni': 23}}) 2025-05-25 03:39:52.739752 | orchestrator | changed: [testbed-node-0] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.10', 'mtu': 1350, 'vni': 42}}) 2025-05-25 03:39:52.739830 | orchestrator | changed: [testbed-node-1] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.11', 'mtu': 1350, 'vni': 42}}) 2025-05-25 03:39:52.741408 | orchestrator | changed: [testbed-node-4] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.14', 'mtu': 1350, 'vni': 42}}) 2025-05-25 03:39:52.742521 | orchestrator | changed: [testbed-node-2] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.12', 'mtu': 1350, 'vni': 42}}) 2025-05-25 03:39:52.743153 | orchestrator | changed: 
[testbed-node-0] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.10/20'], 'dests': ['192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.10', 'mtu': 1350, 'vni': 23}}) 2025-05-25 03:39:52.743888 | orchestrator | changed: [testbed-node-3] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.13', 'mtu': 1350, 'vni': 42}}) 2025-05-25 03:39:52.744676 | orchestrator | changed: [testbed-node-1] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.11/20'], 'dests': ['192.168.16.10', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.11', 'mtu': 1350, 'vni': 23}}) 2025-05-25 03:39:52.745394 | orchestrator | changed: [testbed-node-5] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.5'], 'local_ip': '192.168.16.15', 'mtu': 1350, 'vni': 42}}) 2025-05-25 03:39:52.746129 | orchestrator | changed: [testbed-node-2] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.12/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.12', 'mtu': 1350, 'vni': 23}}) 2025-05-25 03:39:52.747161 | orchestrator | changed: [testbed-node-4] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.14/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.14', 'mtu': 1350, 'vni': 23}}) 2025-05-25 03:39:52.747942 | orchestrator | changed: [testbed-node-3] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.13/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.14', 
'192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.13', 'mtu': 1350, 'vni': 23}}) 2025-05-25 03:39:52.748494 | orchestrator | changed: [testbed-node-5] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.15/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.5'], 'local_ip': '192.168.16.15', 'mtu': 1350, 'vni': 23}}) 2025-05-25 03:39:52.749138 | orchestrator | 2025-05-25 03:39:52.749643 | orchestrator | TASK [osism.commons.network : Create systemd networkd network files] *********** 2025-05-25 03:39:52.750221 | orchestrator | Sunday 25 May 2025 03:39:52 +0000 (0:00:05.166) 0:00:33.823 ************ 2025-05-25 03:39:57.571388 | orchestrator | changed: [testbed-manager] => (item={'key': 'vxlan0', 'value': {'addresses': ['192.168.112.5/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15'], 'local_ip': '192.168.16.5', 'mtu': 1350, 'vni': 42}}) 2025-05-25 03:39:57.572151 | orchestrator | changed: [testbed-node-0] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.10', 'mtu': 1350, 'vni': 42}}) 2025-05-25 03:39:57.574147 | orchestrator | changed: [testbed-node-1] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.11', 'mtu': 1350, 'vni': 42}}) 2025-05-25 03:39:57.576171 | orchestrator | changed: [testbed-node-2] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.12', 'mtu': 1350, 'vni': 42}}) 2025-05-25 03:39:57.576941 | orchestrator | changed: [testbed-manager] => (item={'key': 'vxlan1', 'value': {'addresses': 
['192.168.128.5/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15'], 'local_ip': '192.168.16.5', 'mtu': 1350, 'vni': 23}}) 2025-05-25 03:39:57.578569 | orchestrator | changed: [testbed-node-3] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.13', 'mtu': 1350, 'vni': 42}}) 2025-05-25 03:39:57.579531 | orchestrator | changed: [testbed-node-5] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.5'], 'local_ip': '192.168.16.15', 'mtu': 1350, 'vni': 42}}) 2025-05-25 03:39:57.579988 | orchestrator | changed: [testbed-node-0] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.10/20'], 'dests': ['192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.10', 'mtu': 1350, 'vni': 23}}) 2025-05-25 03:39:57.581846 | orchestrator | changed: [testbed-node-4] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.14', 'mtu': 1350, 'vni': 42}}) 2025-05-25 03:39:57.582136 | orchestrator | changed: [testbed-node-1] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.11/20'], 'dests': ['192.168.16.10', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.11', 'mtu': 1350, 'vni': 23}}) 2025-05-25 03:39:57.582989 | orchestrator | changed: [testbed-node-2] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.12/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.12', 'mtu': 1350, 'vni': 23}}) 
2025-05-25 03:39:57.583850 | orchestrator | changed: [testbed-node-3] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.13/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.13', 'mtu': 1350, 'vni': 23}}) 2025-05-25 03:39:57.584616 | orchestrator | changed: [testbed-node-5] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.15/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.5'], 'local_ip': '192.168.16.15', 'mtu': 1350, 'vni': 23}}) 2025-05-25 03:39:57.585348 | orchestrator | changed: [testbed-node-4] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.14/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.14', 'mtu': 1350, 'vni': 23}}) 2025-05-25 03:39:57.585969 | orchestrator | 2025-05-25 03:39:57.586676 | orchestrator | TASK [osism.commons.network : Include networkd cleanup tasks] ****************** 2025-05-25 03:39:57.587373 | orchestrator | Sunday 25 May 2025 03:39:57 +0000 (0:00:04.832) 0:00:38.656 ************ 2025-05-25 03:39:58.807840 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/network/tasks/cleanup-networkd.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-05-25 03:39:58.808329 | orchestrator | 2025-05-25 03:39:58.811620 | orchestrator | TASK [osism.commons.network : List existing configuration files] *************** 2025-05-25 03:39:58.811672 | orchestrator | Sunday 25 May 2025 03:39:58 +0000 (0:00:01.236) 0:00:39.892 ************ 2025-05-25 03:39:59.251229 | orchestrator | ok: [testbed-manager] 2025-05-25 03:39:59.338453 | orchestrator | ok: [testbed-node-0] 2025-05-25 03:39:59.762222 | orchestrator | ok: [testbed-node-1] 2025-05-25 
03:39:59.765720 | orchestrator | ok: [testbed-node-2] 2025-05-25 03:39:59.765753 | orchestrator | ok: [testbed-node-3] 2025-05-25 03:39:59.766937 | orchestrator | ok: [testbed-node-4] 2025-05-25 03:39:59.769298 | orchestrator | ok: [testbed-node-5] 2025-05-25 03:39:59.769623 | orchestrator | 2025-05-25 03:39:59.770840 | orchestrator | TASK [osism.commons.network : Remove unused configuration files] *************** 2025-05-25 03:39:59.770941 | orchestrator | Sunday 25 May 2025 03:39:59 +0000 (0:00:00.953) 0:00:40.845 ************ 2025-05-25 03:39:59.853719 | orchestrator | skipping: [testbed-manager] => (item=/etc/systemd/network/30-vxlan0.network)  2025-05-25 03:39:59.856562 | orchestrator | skipping: [testbed-manager] => (item=/etc/systemd/network/30-vxlan0.netdev)  2025-05-25 03:39:59.856589 | orchestrator | skipping: [testbed-manager] => (item=/etc/systemd/network/30-vxlan1.network)  2025-05-25 03:39:59.857533 | orchestrator | skipping: [testbed-manager] => (item=/etc/systemd/network/30-vxlan1.netdev)  2025-05-25 03:39:59.941669 | orchestrator | skipping: [testbed-manager] 2025-05-25 03:39:59.942631 | orchestrator | skipping: [testbed-node-0] => (item=/etc/systemd/network/30-vxlan0.network)  2025-05-25 03:39:59.943445 | orchestrator | skipping: [testbed-node-0] => (item=/etc/systemd/network/30-vxlan0.netdev)  2025-05-25 03:39:59.944044 | orchestrator | skipping: [testbed-node-0] => (item=/etc/systemd/network/30-vxlan1.network)  2025-05-25 03:40:00.056472 | orchestrator | skipping: [testbed-node-0] => (item=/etc/systemd/network/30-vxlan1.netdev)  2025-05-25 03:40:00.057686 | orchestrator | skipping: [testbed-node-1] => (item=/etc/systemd/network/30-vxlan0.network)  2025-05-25 03:40:00.061580 | orchestrator | skipping: [testbed-node-1] => (item=/etc/systemd/network/30-vxlan0.netdev)  2025-05-25 03:40:00.061615 | orchestrator | skipping: [testbed-node-1] => (item=/etc/systemd/network/30-vxlan1.network)  2025-05-25 03:40:00.061627 | orchestrator | skipping: 
[testbed-node-1] => (item=/etc/systemd/network/30-vxlan1.netdev)  2025-05-25 03:40:00.333809 | orchestrator | skipping: [testbed-node-0] 2025-05-25 03:40:00.335065 | orchestrator | skipping: [testbed-node-2] => (item=/etc/systemd/network/30-vxlan0.network)  2025-05-25 03:40:00.336731 | orchestrator | skipping: [testbed-node-2] => (item=/etc/systemd/network/30-vxlan0.netdev)  2025-05-25 03:40:00.338119 | orchestrator | skipping: [testbed-node-2] => (item=/etc/systemd/network/30-vxlan1.network)  2025-05-25 03:40:00.339134 | orchestrator | skipping: [testbed-node-2] => (item=/etc/systemd/network/30-vxlan1.netdev)  2025-05-25 03:40:00.432155 | orchestrator | skipping: [testbed-node-1] 2025-05-25 03:40:00.433489 | orchestrator | skipping: [testbed-node-3] => (item=/etc/systemd/network/30-vxlan0.network)  2025-05-25 03:40:00.435049 | orchestrator | skipping: [testbed-node-3] => (item=/etc/systemd/network/30-vxlan0.netdev)  2025-05-25 03:40:00.436223 | orchestrator | skipping: [testbed-node-3] => (item=/etc/systemd/network/30-vxlan1.network)  2025-05-25 03:40:00.436827 | orchestrator | skipping: [testbed-node-3] => (item=/etc/systemd/network/30-vxlan1.netdev)  2025-05-25 03:40:00.530868 | orchestrator | skipping: [testbed-node-2] 2025-05-25 03:40:00.531651 | orchestrator | skipping: [testbed-node-4] => (item=/etc/systemd/network/30-vxlan0.network)  2025-05-25 03:40:00.532030 | orchestrator | skipping: [testbed-node-4] => (item=/etc/systemd/network/30-vxlan0.netdev)  2025-05-25 03:40:00.533048 | orchestrator | skipping: [testbed-node-4] => (item=/etc/systemd/network/30-vxlan1.network)  2025-05-25 03:40:00.534186 | orchestrator | skipping: [testbed-node-4] => (item=/etc/systemd/network/30-vxlan1.netdev)  2025-05-25 03:40:01.823992 | orchestrator | skipping: [testbed-node-3] 2025-05-25 03:40:01.824134 | orchestrator | skipping: [testbed-node-4] 2025-05-25 03:40:01.826472 | orchestrator | skipping: [testbed-node-5] => (item=/etc/systemd/network/30-vxlan0.network)  2025-05-25 
03:40:01.827034 | orchestrator | skipping: [testbed-node-5] => (item=/etc/systemd/network/30-vxlan0.netdev)  2025-05-25 03:40:01.828449 | orchestrator | skipping: [testbed-node-5] => (item=/etc/systemd/network/30-vxlan1.network)  2025-05-25 03:40:01.829291 | orchestrator | skipping: [testbed-node-5] => (item=/etc/systemd/network/30-vxlan1.netdev)  2025-05-25 03:40:01.829811 | orchestrator | skipping: [testbed-node-5] 2025-05-25 03:40:01.831051 | orchestrator | 2025-05-25 03:40:01.831598 | orchestrator | RUNNING HANDLER [osism.commons.network : Reload systemd-networkd] ************** 2025-05-25 03:40:01.832171 | orchestrator | Sunday 25 May 2025 03:40:01 +0000 (0:00:02.061) 0:00:42.906 ************ 2025-05-25 03:40:01.996797 | orchestrator | skipping: [testbed-manager] 2025-05-25 03:40:02.081186 | orchestrator | skipping: [testbed-node-0] 2025-05-25 03:40:02.166352 | orchestrator | skipping: [testbed-node-1] 2025-05-25 03:40:02.255895 | orchestrator | skipping: [testbed-node-2] 2025-05-25 03:40:02.336005 | orchestrator | skipping: [testbed-node-3] 2025-05-25 03:40:02.463428 | orchestrator | skipping: [testbed-node-4] 2025-05-25 03:40:02.464171 | orchestrator | skipping: [testbed-node-5] 2025-05-25 03:40:02.465566 | orchestrator | 2025-05-25 03:40:02.469188 | orchestrator | RUNNING HANDLER [osism.commons.network : Netplan configuration changed] ******** 2025-05-25 03:40:02.469462 | orchestrator | Sunday 25 May 2025 03:40:02 +0000 (0:00:00.645) 0:00:43.551 ************ 2025-05-25 03:40:02.803438 | orchestrator | skipping: [testbed-manager] 2025-05-25 03:40:02.890446 | orchestrator | skipping: [testbed-node-0] 2025-05-25 03:40:02.972800 | orchestrator | skipping: [testbed-node-1] 2025-05-25 03:40:03.056533 | orchestrator | skipping: [testbed-node-2] 2025-05-25 03:40:03.137740 | orchestrator | skipping: [testbed-node-3] 2025-05-25 03:40:03.176523 | orchestrator | skipping: [testbed-node-4] 2025-05-25 03:40:03.177465 | orchestrator | skipping: [testbed-node-5] 2025-05-25 
03:40:03.178717 | orchestrator | 2025-05-25 03:40:03.178999 | orchestrator | PLAY RECAP ********************************************************************* 2025-05-25 03:40:03.179269 | orchestrator | 2025-05-25 03:40:03 | INFO  | Play has been completed. There may now be a delay until all logs have been written. 2025-05-25 03:40:03.179788 | orchestrator | 2025-05-25 03:40:03 | INFO  | Please wait and do not abort execution. 2025-05-25 03:40:03.180533 | orchestrator | testbed-manager : ok=21  changed=5  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2025-05-25 03:40:03.181222 | orchestrator | testbed-node-0 : ok=20  changed=5  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2025-05-25 03:40:03.181757 | orchestrator | testbed-node-1 : ok=20  changed=5  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2025-05-25 03:40:03.182523 | orchestrator | testbed-node-2 : ok=20  changed=5  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2025-05-25 03:40:03.183411 | orchestrator | testbed-node-3 : ok=20  changed=5  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2025-05-25 03:40:03.186495 | orchestrator | testbed-node-4 : ok=20  changed=5  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2025-05-25 03:40:03.187542 | orchestrator | testbed-node-5 : ok=20  changed=5  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2025-05-25 03:40:03.188170 | orchestrator | 2025-05-25 03:40:03.188633 | orchestrator | 2025-05-25 03:40:03.189150 | orchestrator | TASKS RECAP ******************************************************************** 2025-05-25 03:40:03.189833 | orchestrator | Sunday 25 May 2025 03:40:03 +0000 (0:00:00.711) 0:00:44.263 ************ 2025-05-25 03:40:03.190288 | orchestrator | =============================================================================== 2025-05-25 03:40:03.190748 | orchestrator | osism.commons.network : Create systemd networkd netdev files ------------ 5.17s 2025-05-25 03:40:03.191366 | orchestrator | 
osism.commons.network : Create systemd networkd network files ----------- 4.83s 2025-05-25 03:40:03.191848 | orchestrator | osism.commons.network : Prepare netplan configuration template ---------- 3.49s 2025-05-25 03:40:03.192332 | orchestrator | osism.commons.network : Include vxlan interfaces ------------------------ 3.44s 2025-05-25 03:40:03.192765 | orchestrator | osism.commons.network : Install package networkd-dispatcher ------------- 2.11s 2025-05-25 03:40:03.193401 | orchestrator | osism.commons.network : Remove unused configuration files --------------- 2.06s 2025-05-25 03:40:03.193916 | orchestrator | osism.commons.network : Install required packages ----------------------- 1.87s 2025-05-25 03:40:03.195220 | orchestrator | osism.commons.network : Remove netplan configuration template ----------- 1.73s 2025-05-25 03:40:03.195619 | orchestrator | osism.commons.network : Remove ifupdown package ------------------------- 1.72s 2025-05-25 03:40:03.195958 | orchestrator | osism.commons.network : Manage service networkd-dispatcher -------------- 1.67s 2025-05-25 03:40:03.196939 | orchestrator | osism.commons.network : List existing configuration files --------------- 1.60s 2025-05-25 03:40:03.196964 | orchestrator | osism.commons.network : Copy netplan configuration ---------------------- 1.60s 2025-05-25 03:40:03.197481 | orchestrator | osism.commons.network : Include networkd cleanup tasks ------------------ 1.24s 2025-05-25 03:40:03.197782 | orchestrator | osism.commons.network : Remove unused configuration files --------------- 1.24s 2025-05-25 03:40:03.198212 | orchestrator | osism.commons.network : Include type specific tasks --------------------- 1.19s 2025-05-25 03:40:03.198574 | orchestrator | osism.commons.network : Include cleanup tasks --------------------------- 1.19s 2025-05-25 03:40:03.199009 | orchestrator | osism.commons.network : Check if path for interface file exists --------- 1.13s 2025-05-25 03:40:03.199458 | orchestrator | 
osism.commons.network : Create required directories --------------------- 0.98s 2025-05-25 03:40:03.199803 | orchestrator | osism.commons.network : List existing configuration files --------------- 0.95s 2025-05-25 03:40:03.200167 | orchestrator | osism.commons.network : Copy dispatcher scripts ------------------------- 0.82s 2025-05-25 03:40:03.774722 | orchestrator | + osism apply wireguard 2025-05-25 03:40:05.510455 | orchestrator | 2025-05-25 03:40:05 | INFO  | Task cb860a2e-fc3f-4c3c-82cc-18704c117bae (wireguard) was prepared for execution. 2025-05-25 03:40:05.510584 | orchestrator | 2025-05-25 03:40:05 | INFO  | It takes a moment until task cb860a2e-fc3f-4c3c-82cc-18704c117bae (wireguard) has been started and output is visible here. 2025-05-25 03:40:09.535553 | orchestrator | 2025-05-25 03:40:09.535980 | orchestrator | PLAY [Apply role wireguard] **************************************************** 2025-05-25 03:40:09.537796 | orchestrator | 2025-05-25 03:40:09.538481 | orchestrator | TASK [osism.services.wireguard : Install iptables package] ********************* 2025-05-25 03:40:09.540030 | orchestrator | Sunday 25 May 2025 03:40:09 +0000 (0:00:00.228) 0:00:00.228 ************ 2025-05-25 03:40:11.008481 | orchestrator | ok: [testbed-manager] 2025-05-25 03:40:11.009574 | orchestrator | 2025-05-25 03:40:11.010286 | orchestrator | TASK [osism.services.wireguard : Install wireguard package] ******************** 2025-05-25 03:40:11.011348 | orchestrator | Sunday 25 May 2025 03:40:10 +0000 (0:00:01.474) 0:00:01.703 ************ 2025-05-25 03:40:17.166745 | orchestrator | changed: [testbed-manager] 2025-05-25 03:40:17.166957 | orchestrator | 2025-05-25 03:40:17.167560 | orchestrator | TASK [osism.services.wireguard : Create public and private key - server] ******* 2025-05-25 03:40:17.168642 | orchestrator | Sunday 25 May 2025 03:40:17 +0000 (0:00:06.157) 0:00:07.860 ************ 2025-05-25 03:40:17.742529 | orchestrator | changed: [testbed-manager] 2025-05-25 
03:40:17.742917 | orchestrator | 2025-05-25 03:40:17.743392 | orchestrator | TASK [osism.services.wireguard : Create preshared key] ************************* 2025-05-25 03:40:17.743709 | orchestrator | Sunday 25 May 2025 03:40:17 +0000 (0:00:00.573) 0:00:08.434 ************ 2025-05-25 03:40:18.192318 | orchestrator | changed: [testbed-manager] 2025-05-25 03:40:18.192970 | orchestrator | 2025-05-25 03:40:18.194455 | orchestrator | TASK [osism.services.wireguard : Get preshared key] **************************** 2025-05-25 03:40:18.195608 | orchestrator | Sunday 25 May 2025 03:40:18 +0000 (0:00:00.452) 0:00:08.887 ************ 2025-05-25 03:40:18.803331 | orchestrator | ok: [testbed-manager] 2025-05-25 03:40:18.804587 | orchestrator | 2025-05-25 03:40:18.805201 | orchestrator | TASK [osism.services.wireguard : Get public key - server] ********************** 2025-05-25 03:40:18.806630 | orchestrator | Sunday 25 May 2025 03:40:18 +0000 (0:00:00.612) 0:00:09.499 ************ 2025-05-25 03:40:19.213813 | orchestrator | ok: [testbed-manager] 2025-05-25 03:40:19.214695 | orchestrator | 2025-05-25 03:40:19.215606 | orchestrator | TASK [osism.services.wireguard : Get private key - server] ********************* 2025-05-25 03:40:19.216369 | orchestrator | Sunday 25 May 2025 03:40:19 +0000 (0:00:00.408) 0:00:09.907 ************ 2025-05-25 03:40:19.607173 | orchestrator | ok: [testbed-manager] 2025-05-25 03:40:19.609557 | orchestrator | 2025-05-25 03:40:19.610946 | orchestrator | TASK [osism.services.wireguard : Copy wg0.conf configuration file] ************* 2025-05-25 03:40:19.612339 | orchestrator | Sunday 25 May 2025 03:40:19 +0000 (0:00:00.396) 0:00:10.304 ************ 2025-05-25 03:40:20.787312 | orchestrator | changed: [testbed-manager] 2025-05-25 03:40:20.787506 | orchestrator | 2025-05-25 03:40:20.787670 | orchestrator | TASK [osism.services.wireguard : Copy client configuration files] ************** 2025-05-25 03:40:20.788630 | orchestrator | Sunday 25 May 2025 03:40:20 
+0000 (0:00:01.174) 0:00:11.479 ************ 2025-05-25 03:40:21.686555 | orchestrator | changed: [testbed-manager] => (item=None) 2025-05-25 03:40:21.686775 | orchestrator | changed: [testbed-manager] 2025-05-25 03:40:21.687461 | orchestrator | 2025-05-25 03:40:21.688771 | orchestrator | TASK [osism.services.wireguard : Manage wg-quick@wg0.service service] ********** 2025-05-25 03:40:21.689136 | orchestrator | Sunday 25 May 2025 03:40:21 +0000 (0:00:00.901) 0:00:12.380 ************ 2025-05-25 03:40:23.375854 | orchestrator | changed: [testbed-manager] 2025-05-25 03:40:23.376036 | orchestrator | 2025-05-25 03:40:23.377979 | orchestrator | RUNNING HANDLER [osism.services.wireguard : Restart wg0 service] *************** 2025-05-25 03:40:23.378009 | orchestrator | Sunday 25 May 2025 03:40:23 +0000 (0:00:01.689) 0:00:14.070 ************ 2025-05-25 03:40:24.364243 | orchestrator | changed: [testbed-manager] 2025-05-25 03:40:24.364361 | orchestrator | 2025-05-25 03:40:24.364667 | orchestrator | PLAY RECAP ********************************************************************* 2025-05-25 03:40:24.364732 | orchestrator | 2025-05-25 03:40:24 | INFO  | Play has been completed. There may now be a delay until all logs have been written. 2025-05-25 03:40:24.364826 | orchestrator | 2025-05-25 03:40:24 | INFO  | Please wait and do not abort execution. 
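The play above generated server keys, a preshared key, and rendered `/etc/wireguard/wg0.conf` before enabling `wg-quick@wg0.service`. For orientation, a minimal wg-quick configuration has this shape — all values below are illustrative placeholders, not taken from this run:

```
[Interface]
# Private key generated by "wg genkey" (the play's "Create public and private key - server" step)
PrivateKey = <server-private-key>
Address = 192.168.48.1/24
ListenPort = 51820

[Peer]
# One section per client; the play also rendered matching client configuration files
PublicKey = <client-public-key>
PresharedKey = <preshared-key>
AllowedIPs = 192.168.48.2/32
```

Restarting `wg-quick@wg0` (the handler at the end of the play) re-reads this file and brings the tunnel up with the new keys.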
2025-05-25 03:40:24.365516 | orchestrator | testbed-manager : ok=11  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-05-25 03:40:24.367062 | orchestrator | 2025-05-25 03:40:24.367443 | orchestrator | 2025-05-25 03:40:24.367972 | orchestrator | TASKS RECAP ******************************************************************** 2025-05-25 03:40:24.369164 | orchestrator | Sunday 25 May 2025 03:40:24 +0000 (0:00:00.989) 0:00:15.060 ************ 2025-05-25 03:40:24.369199 | orchestrator | =============================================================================== 2025-05-25 03:40:24.369836 | orchestrator | osism.services.wireguard : Install wireguard package -------------------- 6.16s 2025-05-25 03:40:24.370231 | orchestrator | osism.services.wireguard : Manage wg-quick@wg0.service service ---------- 1.69s 2025-05-25 03:40:24.370655 | orchestrator | osism.services.wireguard : Install iptables package --------------------- 1.47s 2025-05-25 03:40:24.371149 | orchestrator | osism.services.wireguard : Copy wg0.conf configuration file ------------- 1.17s 2025-05-25 03:40:24.372429 | orchestrator | osism.services.wireguard : Restart wg0 service -------------------------- 0.99s 2025-05-25 03:40:24.372925 | orchestrator | osism.services.wireguard : Copy client configuration files -------------- 0.90s 2025-05-25 03:40:24.373409 | orchestrator | osism.services.wireguard : Get preshared key ---------------------------- 0.61s 2025-05-25 03:40:24.373884 | orchestrator | osism.services.wireguard : Create public and private key - server ------- 0.57s 2025-05-25 03:40:24.374363 | orchestrator | osism.services.wireguard : Create preshared key ------------------------- 0.45s 2025-05-25 03:40:24.374962 | orchestrator | osism.services.wireguard : Get public key - server ---------------------- 0.41s 2025-05-25 03:40:24.375376 | orchestrator | osism.services.wireguard : Get private key - server --------------------- 0.40s 2025-05-25 03:40:24.924614 | orchestrator | + sh 
-c /opt/configuration/scripts/prepare-wireguard-configuration.sh 2025-05-25 03:40:24.959003 | orchestrator | % Total % Received % Xferd Average Speed Time Time Time Current 2025-05-25 03:40:24.959066 | orchestrator | Dload Upload Total Spent Left Speed 2025-05-25 03:40:25.056244 | orchestrator | 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0 100 15 100 15 0 0 155 0 --:--:-- --:--:-- --:--:-- 156 2025-05-25 03:40:25.069674 | orchestrator | + osism apply --environment custom workarounds 2025-05-25 03:40:26.757914 | orchestrator | 2025-05-25 03:40:26 | INFO  | Trying to run play workarounds in environment custom 2025-05-25 03:40:26.816136 | orchestrator | 2025-05-25 03:40:26 | INFO  | Task fc4b9142-df30-4563-9fe1-a46a63f6d632 (workarounds) was prepared for execution. 2025-05-25 03:40:26.816235 | orchestrator | 2025-05-25 03:40:26 | INFO  | It takes a moment until task fc4b9142-df30-4563-9fe1-a46a63f6d632 (workarounds) has been started and output is visible here. 2025-05-25 03:40:30.704323 | orchestrator | 2025-05-25 03:40:30.705982 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-05-25 03:40:30.706014 | orchestrator | 2025-05-25 03:40:30.707660 | orchestrator | TASK [Group hosts based on virtualization_role] ******************************** 2025-05-25 03:40:30.708128 | orchestrator | Sunday 25 May 2025 03:40:30 +0000 (0:00:00.138) 0:00:00.138 ************ 2025-05-25 03:40:30.852359 | orchestrator | changed: [testbed-manager] => (item=virtualization_role_guest) 2025-05-25 03:40:30.926659 | orchestrator | changed: [testbed-node-3] => (item=virtualization_role_guest) 2025-05-25 03:40:30.999590 | orchestrator | changed: [testbed-node-4] => (item=virtualization_role_guest) 2025-05-25 03:40:31.074914 | orchestrator | changed: [testbed-node-5] => (item=virtualization_role_guest) 2025-05-25 03:40:31.217703 | orchestrator | changed: [testbed-node-0] => (item=virtualization_role_guest) 2025-05-25 03:40:31.350312 | orchestrator 
| changed: [testbed-node-1] => (item=virtualization_role_guest) 2025-05-25 03:40:31.350818 | orchestrator | changed: [testbed-node-2] => (item=virtualization_role_guest) 2025-05-25 03:40:31.352320 | orchestrator | 2025-05-25 03:40:31.353641 | orchestrator | PLAY [Apply netplan configuration on the manager node] ************************* 2025-05-25 03:40:31.354327 | orchestrator | 2025-05-25 03:40:31.355219 | orchestrator | TASK [Apply netplan configuration] ********************************************* 2025-05-25 03:40:31.355991 | orchestrator | Sunday 25 May 2025 03:40:31 +0000 (0:00:00.648) 0:00:00.786 ************ 2025-05-25 03:40:33.804313 | orchestrator | ok: [testbed-manager] 2025-05-25 03:40:33.804444 | orchestrator | 2025-05-25 03:40:33.804462 | orchestrator | PLAY [Apply netplan configuration on all other nodes] ************************** 2025-05-25 03:40:33.807351 | orchestrator | 2025-05-25 03:40:33.807388 | orchestrator | TASK [Apply netplan configuration] ********************************************* 2025-05-25 03:40:33.807400 | orchestrator | Sunday 25 May 2025 03:40:33 +0000 (0:00:02.449) 0:00:03.236 ************ 2025-05-25 03:40:35.532027 | orchestrator | ok: [testbed-node-3] 2025-05-25 03:40:35.533149 | orchestrator | ok: [testbed-node-4] 2025-05-25 03:40:35.534120 | orchestrator | ok: [testbed-node-5] 2025-05-25 03:40:35.537498 | orchestrator | ok: [testbed-node-0] 2025-05-25 03:40:35.538141 | orchestrator | ok: [testbed-node-1] 2025-05-25 03:40:35.538934 | orchestrator | ok: [testbed-node-2] 2025-05-25 03:40:35.539624 | orchestrator | 2025-05-25 03:40:35.540333 | orchestrator | PLAY [Add custom CA certificates to non-manager nodes] ************************* 2025-05-25 03:40:35.540918 | orchestrator | 2025-05-25 03:40:35.541551 | orchestrator | TASK [Copy custom CA certificates] ********************************************* 2025-05-25 03:40:35.542197 | orchestrator | Sunday 25 May 2025 03:40:35 +0000 (0:00:01.727) 0:00:04.963 ************ 
2025-05-25 03:40:36.986697 | orchestrator | changed: [testbed-node-0] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt) 2025-05-25 03:40:36.987436 | orchestrator | changed: [testbed-node-4] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt) 2025-05-25 03:40:36.988787 | orchestrator | changed: [testbed-node-3] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt) 2025-05-25 03:40:36.991292 | orchestrator | changed: [testbed-node-5] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt) 2025-05-25 03:40:36.992700 | orchestrator | changed: [testbed-node-1] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt) 2025-05-25 03:40:36.998547 | orchestrator | changed: [testbed-node-2] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt) 2025-05-25 03:40:36.998618 | orchestrator | 2025-05-25 03:40:36.998634 | orchestrator | TASK [Run update-ca-certificates] ********************************************** 2025-05-25 03:40:36.998647 | orchestrator | Sunday 25 May 2025 03:40:36 +0000 (0:00:01.453) 0:00:06.417 ************ 2025-05-25 03:40:40.634432 | orchestrator | changed: [testbed-node-3] 2025-05-25 03:40:40.634541 | orchestrator | changed: [testbed-node-4] 2025-05-25 03:40:40.636896 | orchestrator | changed: [testbed-node-5] 2025-05-25 03:40:40.638703 | orchestrator | changed: [testbed-node-0] 2025-05-25 03:40:40.639870 | orchestrator | changed: [testbed-node-2] 2025-05-25 03:40:40.641036 | orchestrator | changed: [testbed-node-1] 2025-05-25 03:40:40.642369 | orchestrator | 2025-05-25 03:40:40.642795 | orchestrator | TASK [Run update-ca-trust] ***************************************************** 2025-05-25 03:40:40.644493 | orchestrator | Sunday 25 May 2025 03:40:40 +0000 (0:00:03.650) 0:00:10.067 ************ 2025-05-25 03:40:40.787830 | orchestrator | skipping: [testbed-node-3] 2025-05-25 03:40:40.861522 | orchestrator | skipping: 
[testbed-node-4] 2025-05-25 03:40:40.939889 | orchestrator | skipping: [testbed-node-5] 2025-05-25 03:40:41.017177 | orchestrator | skipping: [testbed-node-0] 2025-05-25 03:40:41.330419 | orchestrator | skipping: [testbed-node-1] 2025-05-25 03:40:41.331201 | orchestrator | skipping: [testbed-node-2] 2025-05-25 03:40:41.332421 | orchestrator | 2025-05-25 03:40:41.333459 | orchestrator | PLAY [Add a workaround service] ************************************************ 2025-05-25 03:40:41.335239 | orchestrator | 2025-05-25 03:40:41.335265 | orchestrator | TASK [Copy workarounds.sh scripts] ********************************************* 2025-05-25 03:40:41.336956 | orchestrator | Sunday 25 May 2025 03:40:41 +0000 (0:00:00.696) 0:00:10.764 ************ 2025-05-25 03:40:42.913796 | orchestrator | changed: [testbed-manager] 2025-05-25 03:40:42.915554 | orchestrator | changed: [testbed-node-3] 2025-05-25 03:40:42.915656 | orchestrator | changed: [testbed-node-4] 2025-05-25 03:40:42.916855 | orchestrator | changed: [testbed-node-5] 2025-05-25 03:40:42.921547 | orchestrator | changed: [testbed-node-0] 2025-05-25 03:40:42.921582 | orchestrator | changed: [testbed-node-1] 2025-05-25 03:40:42.921594 | orchestrator | changed: [testbed-node-2] 2025-05-25 03:40:42.921687 | orchestrator | 2025-05-25 03:40:42.922431 | orchestrator | TASK [Copy workarounds systemd unit file] ************************************** 2025-05-25 03:40:42.923169 | orchestrator | Sunday 25 May 2025 03:40:42 +0000 (0:00:01.583) 0:00:12.347 ************ 2025-05-25 03:40:44.509769 | orchestrator | changed: [testbed-manager] 2025-05-25 03:40:44.509876 | orchestrator | changed: [testbed-node-3] 2025-05-25 03:40:44.510399 | orchestrator | changed: [testbed-node-4] 2025-05-25 03:40:44.511736 | orchestrator | changed: [testbed-node-5] 2025-05-25 03:40:44.512054 | orchestrator | changed: [testbed-node-0] 2025-05-25 03:40:44.513192 | orchestrator | changed: [testbed-node-1] 2025-05-25 03:40:44.513846 | orchestrator | 
changed: [testbed-node-2] 2025-05-25 03:40:44.514819 | orchestrator | 2025-05-25 03:40:44.515603 | orchestrator | TASK [Reload systemd daemon] *************************************************** 2025-05-25 03:40:44.516030 | orchestrator | Sunday 25 May 2025 03:40:44 +0000 (0:00:01.589) 0:00:13.937 ************ 2025-05-25 03:40:45.926217 | orchestrator | ok: [testbed-node-0] 2025-05-25 03:40:45.929505 | orchestrator | ok: [testbed-node-3] 2025-05-25 03:40:45.929548 | orchestrator | ok: [testbed-node-5] 2025-05-25 03:40:45.929561 | orchestrator | ok: [testbed-manager] 2025-05-25 03:40:45.929574 | orchestrator | ok: [testbed-node-4] 2025-05-25 03:40:45.930778 | orchestrator | ok: [testbed-node-1] 2025-05-25 03:40:45.931404 | orchestrator | ok: [testbed-node-2] 2025-05-25 03:40:45.932530 | orchestrator | 2025-05-25 03:40:45.933450 | orchestrator | TASK [Enable workarounds.service (Debian)] ************************************* 2025-05-25 03:40:45.934342 | orchestrator | Sunday 25 May 2025 03:40:45 +0000 (0:00:01.423) 0:00:15.360 ************ 2025-05-25 03:40:47.625987 | orchestrator | changed: [testbed-manager] 2025-05-25 03:40:47.628056 | orchestrator | changed: [testbed-node-3] 2025-05-25 03:40:47.628955 | orchestrator | changed: [testbed-node-4] 2025-05-25 03:40:47.630650 | orchestrator | changed: [testbed-node-5] 2025-05-25 03:40:47.630962 | orchestrator | changed: [testbed-node-0] 2025-05-25 03:40:47.631571 | orchestrator | changed: [testbed-node-1] 2025-05-25 03:40:47.633494 | orchestrator | changed: [testbed-node-2] 2025-05-25 03:40:47.633521 | orchestrator | 2025-05-25 03:40:47.633536 | orchestrator | TASK [Enable and start workarounds.service (RedHat)] *************************** 2025-05-25 03:40:47.635331 | orchestrator | Sunday 25 May 2025 03:40:47 +0000 (0:00:01.696) 0:00:17.056 ************ 2025-05-25 03:40:47.782509 | orchestrator | skipping: [testbed-manager] 2025-05-25 03:40:47.863334 | orchestrator | skipping: [testbed-node-3] 2025-05-25 
03:40:47.943017 | orchestrator | skipping: [testbed-node-4] 2025-05-25 03:40:48.020658 | orchestrator | skipping: [testbed-node-5] 2025-05-25 03:40:48.095785 | orchestrator | skipping: [testbed-node-0] 2025-05-25 03:40:48.227044 | orchestrator | skipping: [testbed-node-1] 2025-05-25 03:40:48.228739 | orchestrator | skipping: [testbed-node-2] 2025-05-25 03:40:48.231748 | orchestrator | 2025-05-25 03:40:48.231796 | orchestrator | PLAY [On Ubuntu 24.04 install python3-docker from Debian Sid] ****************** 2025-05-25 03:40:48.231971 | orchestrator | 2025-05-25 03:40:48.232587 | orchestrator | TASK [Install python3-docker] ************************************************** 2025-05-25 03:40:48.233431 | orchestrator | Sunday 25 May 2025 03:40:48 +0000 (0:00:00.605) 0:00:17.662 ************ 2025-05-25 03:40:50.711366 | orchestrator | ok: [testbed-node-3] 2025-05-25 03:40:50.712197 | orchestrator | ok: [testbed-manager] 2025-05-25 03:40:50.713460 | orchestrator | ok: [testbed-node-4] 2025-05-25 03:40:50.713801 | orchestrator | ok: [testbed-node-0] 2025-05-25 03:40:50.714918 | orchestrator | ok: [testbed-node-5] 2025-05-25 03:40:50.716331 | orchestrator | ok: [testbed-node-1] 2025-05-25 03:40:50.716760 | orchestrator | ok: [testbed-node-2] 2025-05-25 03:40:50.717224 | orchestrator | 2025-05-25 03:40:50.717765 | orchestrator | PLAY RECAP ********************************************************************* 2025-05-25 03:40:50.718259 | orchestrator | 2025-05-25 03:40:50 | INFO  | Play has been completed. There may now be a delay until all logs have been written. 2025-05-25 03:40:50.718508 | orchestrator | 2025-05-25 03:40:50 | INFO  | Please wait and do not abort execution. 
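The "Add a workaround service" play copies a `workarounds.sh` script plus a systemd unit, reloads the daemon, and enables the unit on Debian-family hosts. A oneshot unit of the kind deployed here would look roughly like the following sketch (paths and description are assumptions, not read from the playbook):

```
[Unit]
Description=Apply testbed workarounds at boot
After=network-online.target

[Service]
Type=oneshot
# Path assumed for illustration; the play only shows that a workarounds.sh script is copied
ExecStart=/usr/local/bin/workarounds.sh
RemainAfterExit=yes

[Install]
WantedBy=multi-user.target
```

`Type=oneshot` with `RemainAfterExit=yes` is the usual pattern for run-once boot fixups: systemd marks the unit active after the script exits so it is not retriggered.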
2025-05-25 03:40:50.719396 | orchestrator | testbed-manager : ok=7  changed=4  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-05-25 03:40:50.719895 | orchestrator | testbed-node-0 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-05-25 03:40:50.720501 | orchestrator | testbed-node-1 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-05-25 03:40:50.721059 | orchestrator | testbed-node-2 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-05-25 03:40:50.721564 | orchestrator | testbed-node-3 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-05-25 03:40:50.722399 | orchestrator | testbed-node-4 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-05-25 03:40:50.723061 | orchestrator | testbed-node-5 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-05-25 03:40:50.723357 | orchestrator | 2025-05-25 03:40:50.723963 | orchestrator | 2025-05-25 03:40:50.724388 | orchestrator | TASKS RECAP ******************************************************************** 2025-05-25 03:40:50.724868 | orchestrator | Sunday 25 May 2025 03:40:50 +0000 (0:00:02.481) 0:00:20.143 ************ 2025-05-25 03:40:50.725559 | orchestrator | =============================================================================== 2025-05-25 03:40:50.726006 | orchestrator | Run update-ca-certificates ---------------------------------------------- 3.65s 2025-05-25 03:40:50.726699 | orchestrator | Install python3-docker -------------------------------------------------- 2.48s 2025-05-25 03:40:50.726782 | orchestrator | Apply netplan configuration --------------------------------------------- 2.45s 2025-05-25 03:40:50.727293 | orchestrator | Apply netplan configuration --------------------------------------------- 1.73s 2025-05-25 03:40:50.727610 | orchestrator | Enable workarounds.service (Debian) ------------------------------------- 1.70s 
2025-05-25 03:40:50.728130 | orchestrator | Copy workarounds systemd unit file -------------------------------------- 1.59s 2025-05-25 03:40:50.728537 | orchestrator | Copy workarounds.sh scripts --------------------------------------------- 1.58s 2025-05-25 03:40:50.728965 | orchestrator | Copy custom CA certificates --------------------------------------------- 1.45s 2025-05-25 03:40:50.729401 | orchestrator | Reload systemd daemon --------------------------------------------------- 1.42s 2025-05-25 03:40:50.729748 | orchestrator | Run update-ca-trust ----------------------------------------------------- 0.70s 2025-05-25 03:40:50.730289 | orchestrator | Group hosts based on virtualization_role -------------------------------- 0.65s 2025-05-25 03:40:50.730994 | orchestrator | Enable and start workarounds.service (RedHat) --------------------------- 0.61s 2025-05-25 03:40:51.305181 | orchestrator | + osism apply reboot -l testbed-nodes -e ireallymeanit=yes 2025-05-25 03:40:52.978406 | orchestrator | 2025-05-25 03:40:52 | INFO  | Task 600140a6-7ae9-4a13-99f6-65d4c62dfc7f (reboot) was prepared for execution. 2025-05-25 03:40:52.978517 | orchestrator | 2025-05-25 03:40:52 | INFO  | It takes a moment until task 600140a6-7ae9-4a13-99f6-65d4c62dfc7f (reboot) has been started and output is visible here. 
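The `osism apply reboot -l testbed-nodes -e ireallymeanit=yes` invocation above gates the reboot behind an explicit confirmation variable and then reboots without waiting (the "wait for the reboot to complete" task is skipped). A hedged sketch of that pattern in plain Ansible — not the actual OSISM playbook — could be:

```
- name: Exit playbook, if user did not mean to reboot systems
  ansible.builtin.fail:
    msg: "Pass -e ireallymeanit=yes to confirm the reboot."
  when: ireallymeanit | default('no') != 'yes'

- name: Reboot system - do not wait for the reboot to complete
  ansible.builtin.reboot:
    post_reboot_delay: 0
  async: 60
  poll: 0
```

Using `async` with `poll: 0` fires the reboot and returns immediately, which matches the per-node "changed" followed by a skipped wait task seen in this run.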
2025-05-25 03:40:56.895697 | orchestrator |
2025-05-25 03:40:56.895812 | orchestrator | PLAY [Reboot systems] **********************************************************
2025-05-25 03:40:56.896894 | orchestrator |
2025-05-25 03:40:56.898568 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] *******************
2025-05-25 03:40:56.899513 | orchestrator | Sunday 25 May 2025 03:40:56 +0000 (0:00:00.179) 0:00:00.179 ************
2025-05-25 03:40:56.995653 | orchestrator | skipping: [testbed-node-0]
2025-05-25 03:40:56.996362 | orchestrator |
2025-05-25 03:40:56.997848 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ******************
2025-05-25 03:40:56.998769 | orchestrator | Sunday 25 May 2025 03:40:56 +0000 (0:00:00.103) 0:00:00.282 ************
2025-05-25 03:40:57.854564 | orchestrator | changed: [testbed-node-0]
2025-05-25 03:40:57.855190 | orchestrator |
2025-05-25 03:40:57.855238 | orchestrator | TASK [Reboot system - wait for the reboot to complete] *************************
2025-05-25 03:40:57.855910 | orchestrator | Sunday 25 May 2025 03:40:57 +0000 (0:00:00.859) 0:00:01.142 ************
2025-05-25 03:40:57.961785 | orchestrator | skipping: [testbed-node-0]
2025-05-25 03:40:57.961874 | orchestrator |
2025-05-25 03:40:57.961890 | orchestrator | PLAY [Reboot systems] **********************************************************
2025-05-25 03:40:57.961904 | orchestrator |
2025-05-25 03:40:57.961976 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] *******************
2025-05-25 03:40:57.962367 | orchestrator | Sunday 25 May 2025 03:40:57 +0000 (0:00:00.103) 0:00:01.245 ************
2025-05-25 03:40:58.044623 | orchestrator | skipping: [testbed-node-1]
2025-05-25 03:40:58.044876 | orchestrator |
2025-05-25 03:40:58.045478 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ******************
2025-05-25 03:40:58.045828 | orchestrator | Sunday 25 May 2025 03:40:58 +0000 (0:00:00.087) 0:00:01.332 ************
2025-05-25 03:40:58.644760 | orchestrator | changed: [testbed-node-1]
2025-05-25 03:40:58.645449 | orchestrator |
2025-05-25 03:40:58.645997 | orchestrator | TASK [Reboot system - wait for the reboot to complete] *************************
2025-05-25 03:40:58.646648 | orchestrator | Sunday 25 May 2025 03:40:58 +0000 (0:00:00.599) 0:00:01.932 ************
2025-05-25 03:40:58.752587 | orchestrator | skipping: [testbed-node-1]
2025-05-25 03:40:58.752861 | orchestrator |
2025-05-25 03:40:58.754256 | orchestrator | PLAY [Reboot systems] **********************************************************
2025-05-25 03:40:58.755716 | orchestrator |
2025-05-25 03:40:58.756219 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] *******************
2025-05-25 03:40:58.756917 | orchestrator | Sunday 25 May 2025 03:40:58 +0000 (0:00:00.106) 0:00:02.038 ************
2025-05-25 03:40:58.904432 | orchestrator | skipping: [testbed-node-2]
2025-05-25 03:40:58.906413 | orchestrator |
2025-05-25 03:40:58.906503 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ******************
2025-05-25 03:40:58.906584 | orchestrator | Sunday 25 May 2025 03:40:58 +0000 (0:00:00.152) 0:00:02.191 ************
2025-05-25 03:40:59.527962 | orchestrator | changed: [testbed-node-2]
2025-05-25 03:40:59.528421 | orchestrator |
2025-05-25 03:40:59.529041 | orchestrator | TASK [Reboot system - wait for the reboot to complete] *************************
2025-05-25 03:40:59.529891 | orchestrator | Sunday 25 May 2025 03:40:59 +0000 (0:00:00.622) 0:00:02.814 ************
2025-05-25 03:40:59.632410 | orchestrator | skipping: [testbed-node-2]
2025-05-25 03:40:59.632873 | orchestrator |
2025-05-25 03:40:59.633510 | orchestrator | PLAY [Reboot systems] **********************************************************
2025-05-25 03:40:59.633953 | orchestrator |
2025-05-25 03:40:59.634186 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] *******************
2025-05-25 03:40:59.635641 | orchestrator | Sunday 25 May 2025 03:40:59 +0000 (0:00:00.104) 0:00:02.918 ************
2025-05-25 03:40:59.717829 | orchestrator | skipping: [testbed-node-3]
2025-05-25 03:40:59.718450 | orchestrator |
2025-05-25 03:40:59.718939 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ******************
2025-05-25 03:40:59.719204 | orchestrator | Sunday 25 May 2025 03:40:59 +0000 (0:00:00.084) 0:00:03.003 ************
2025-05-25 03:41:00.354862 | orchestrator | changed: [testbed-node-3]
2025-05-25 03:41:00.355411 | orchestrator |
2025-05-25 03:41:00.357392 | orchestrator | TASK [Reboot system - wait for the reboot to complete] *************************
2025-05-25 03:41:00.357719 | orchestrator | Sunday 25 May 2025 03:41:00 +0000 (0:00:00.636) 0:00:03.640 ************
2025-05-25 03:41:00.459670 | orchestrator | skipping: [testbed-node-3]
2025-05-25 03:41:00.460515 | orchestrator |
2025-05-25 03:41:00.461159 | orchestrator | PLAY [Reboot systems] **********************************************************
2025-05-25 03:41:00.461859 | orchestrator |
2025-05-25 03:41:00.463272 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] *******************
2025-05-25 03:41:00.463296 | orchestrator | Sunday 25 May 2025 03:41:00 +0000 (0:00:00.103) 0:00:03.744 ************
2025-05-25 03:41:00.565440 | orchestrator | skipping: [testbed-node-4]
2025-05-25 03:41:00.565601 | orchestrator |
2025-05-25 03:41:00.566202 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ******************
2025-05-25 03:41:00.567003 | orchestrator | Sunday 25 May 2025 03:41:00 +0000 (0:00:00.107) 0:00:03.851 ************
2025-05-25 03:41:01.216746 | orchestrator | changed: [testbed-node-4]
2025-05-25 03:41:01.216910 | orchestrator |
2025-05-25 03:41:01.218421 | orchestrator | TASK [Reboot system - wait for the reboot to complete] *************************
2025-05-25 03:41:01.219233 | orchestrator | Sunday 25 May 2025 03:41:01 +0000 (0:00:00.648) 0:00:04.500 ************
2025-05-25 03:41:01.345996 | orchestrator | skipping: [testbed-node-4]
2025-05-25 03:41:01.347077 | orchestrator |
2025-05-25 03:41:01.350332 | orchestrator | PLAY [Reboot systems] **********************************************************
2025-05-25 03:41:01.350390 | orchestrator |
2025-05-25 03:41:01.350405 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] *******************
2025-05-25 03:41:01.351408 | orchestrator | Sunday 25 May 2025 03:41:01 +0000 (0:00:00.129) 0:00:04.630 ************
2025-05-25 03:41:01.445515 | orchestrator | skipping: [testbed-node-5]
2025-05-25 03:41:01.445686 | orchestrator |
2025-05-25 03:41:01.446560 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ******************
2025-05-25 03:41:01.448204 | orchestrator | Sunday 25 May 2025 03:41:01 +0000 (0:00:00.101) 0:00:04.731 ************
2025-05-25 03:41:02.086416 | orchestrator | changed: [testbed-node-5]
2025-05-25 03:41:02.087186 | orchestrator |
2025-05-25 03:41:02.088132 | orchestrator | TASK [Reboot system - wait for the reboot to complete] *************************
2025-05-25 03:41:02.089095 | orchestrator | Sunday 25 May 2025 03:41:02 +0000 (0:00:00.639) 0:00:05.371 ************
2025-05-25 03:41:02.126723 | orchestrator | skipping: [testbed-node-5]
2025-05-25 03:41:02.127016 | orchestrator |
2025-05-25 03:41:02.127765 | orchestrator | PLAY RECAP *********************************************************************
2025-05-25 03:41:02.127859 | orchestrator | 2025-05-25 03:41:02 | INFO  | Play has been completed. There may now be a delay until all logs have been written.
2025-05-25 03:41:02.127967 | orchestrator | 2025-05-25 03:41:02 | INFO  | Please wait and do not abort execution.
2025-05-25 03:41:02.128831 | orchestrator | testbed-node-0 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-05-25 03:41:02.129879 | orchestrator | testbed-node-1 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-05-25 03:41:02.130598 | orchestrator | testbed-node-2 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-05-25 03:41:02.131633 | orchestrator | testbed-node-3 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-05-25 03:41:02.131951 | orchestrator | testbed-node-4 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-05-25 03:41:02.132485 | orchestrator | testbed-node-5 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-05-25 03:41:02.132970 | orchestrator |
2025-05-25 03:41:02.133565 | orchestrator |
2025-05-25 03:41:02.134064 | orchestrator | TASKS RECAP ********************************************************************
2025-05-25 03:41:02.134413 | orchestrator | Sunday 25 May 2025 03:41:02 +0000 (0:00:00.041) 0:00:05.412 ************
2025-05-25 03:41:02.134851 | orchestrator | ===============================================================================
2025-05-25 03:41:02.135297 | orchestrator | Reboot system - do not wait for the reboot to complete ------------------ 4.01s
2025-05-25 03:41:02.135678 | orchestrator | Exit playbook, if user did not mean to reboot systems ------------------- 0.64s
2025-05-25 03:41:02.136056 | orchestrator | Reboot system - wait for the reboot to complete ------------------------- 0.59s
2025-05-25 03:41:02.675702 | orchestrator | + osism apply wait-for-connection -l testbed-nodes -e ireallymeanit=yes
2025-05-25 03:41:04.396410 | orchestrator | 2025-05-25 03:41:04 | INFO  | Task cc9678d9-20b3-47dd-a1ac-3c8ca0b9ac92 (wait-for-connection) was prepared for execution.
2025-05-25 03:41:04.396563 | orchestrator | 2025-05-25 03:41:04 | INFO  | It takes a moment until task cc9678d9-20b3-47dd-a1ac-3c8ca0b9ac92 (wait-for-connection) has been started and output is visible here.
2025-05-25 03:41:08.382176 | orchestrator |
2025-05-25 03:41:08.385386 | orchestrator | PLAY [Wait until remote systems are reachable] *********************************
2025-05-25 03:41:08.387504 | orchestrator |
2025-05-25 03:41:08.388740 | orchestrator | TASK [Wait until remote system is reachable] ***********************************
2025-05-25 03:41:08.390230 | orchestrator | Sunday 25 May 2025 03:41:08 +0000 (0:00:00.231) 0:00:00.231 ************
2025-05-25 03:41:20.111617 | orchestrator | ok: [testbed-node-0]
2025-05-25 03:41:20.111736 | orchestrator | ok: [testbed-node-2]
2025-05-25 03:41:20.111752 | orchestrator | ok: [testbed-node-3]
2025-05-25 03:41:20.111764 | orchestrator | ok: [testbed-node-1]
2025-05-25 03:41:20.111775 | orchestrator | ok: [testbed-node-4]
2025-05-25 03:41:20.111786 | orchestrator | ok: [testbed-node-5]
2025-05-25 03:41:20.112285 | orchestrator |
2025-05-25 03:41:20.113083 | orchestrator | PLAY RECAP *********************************************************************
2025-05-25 03:41:20.113306 | orchestrator | 2025-05-25 03:41:20 | INFO  | Play has been completed. There may now be a delay until all logs have been written.
2025-05-25 03:41:20.113693 | orchestrator | 2025-05-25 03:41:20 | INFO  | Please wait and do not abort execution.
2025-05-25 03:41:20.116358 | orchestrator | testbed-node-0 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-05-25 03:41:20.116441 | orchestrator | testbed-node-1 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-05-25 03:41:20.116455 | orchestrator | testbed-node-2 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-05-25 03:41:20.116467 | orchestrator | testbed-node-3 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-05-25 03:41:20.116523 | orchestrator | testbed-node-4 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-05-25 03:41:20.117087 | orchestrator | testbed-node-5 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-05-25 03:41:20.117212 | orchestrator |
2025-05-25 03:41:20.117720 | orchestrator |
2025-05-25 03:41:20.118348 | orchestrator | TASKS RECAP ********************************************************************
2025-05-25 03:41:20.119555 | orchestrator | Sunday 25 May 2025 03:41:20 +0000 (0:00:11.728) 0:00:11.959 ************
2025-05-25 03:41:20.120344 | orchestrator | ===============================================================================
2025-05-25 03:41:20.120610 | orchestrator | Wait until remote system is reachable ---------------------------------- 11.73s
2025-05-25 03:41:20.680813 | orchestrator | + osism apply hddtemp
2025-05-25 03:41:22.388203 | orchestrator | 2025-05-25 03:41:22 | INFO  | Task 5a7d8569-da96-4216-82a4-d3d88ff29195 (hddtemp) was prepared for execution.
2025-05-25 03:41:22.388307 | orchestrator | 2025-05-25 03:41:22 | INFO  | It takes a moment until task 5a7d8569-da96-4216-82a4-d3d88ff29195 (hddtemp) has been started and output is visible here.
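The two plays above implement a common rolling-restart pattern: `osism apply reboot` triggers the reboot on every node without waiting, and `osism apply wait-for-connection` then polls until each node answers again. A minimal shell sketch of the same idea, assuming plain `ssh` access to the nodes (the host names, timeout, and helper names are illustrative, not part of the OSISM playbooks):

```shell
# wait_for_ssh polls a host until an SSH no-op succeeds or a deadline passes.
wait_for_ssh() {
    local host=$1 timeout=${2:-300}
    local deadline=$(( $(date +%s) + timeout ))
    until ssh -o ConnectTimeout=5 -o BatchMode=yes "$host" true 2>/dev/null; do
        (( $(date +%s) >= deadline )) && return 1
        sleep 5
    done
}

# Fire all reboots first (the connection drops, so errors are tolerated),
# then wait for every node to come back. Defined but not invoked here.
reboot_and_wait() {
    local node
    for node in testbed-node-{0..5}; do
        ssh "$node" 'sudo systemctl reboot' 2>/dev/null || true
    done
    for node in testbed-node-{0..5}; do
        wait_for_ssh "$node" 300
    done
}
```

Firing all reboots before waiting on any of them is what keeps the total downtime close to a single reboot rather than six sequential ones.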
2025-05-25 03:41:26.411846 | orchestrator |
2025-05-25 03:41:26.412210 | orchestrator | PLAY [Apply role hddtemp] ******************************************************
2025-05-25 03:41:26.416836 | orchestrator |
2025-05-25 03:41:26.417249 | orchestrator | TASK [osism.services.hddtemp : Gather variables for each operating system] *****
2025-05-25 03:41:26.418777 | orchestrator | Sunday 25 May 2025 03:41:26 +0000 (0:00:00.262) 0:00:00.262 ************
2025-05-25 03:41:26.560157 | orchestrator | ok: [testbed-manager]
2025-05-25 03:41:26.635324 | orchestrator | ok: [testbed-node-0]
2025-05-25 03:41:26.711439 | orchestrator | ok: [testbed-node-1]
2025-05-25 03:41:26.789584 | orchestrator | ok: [testbed-node-2]
2025-05-25 03:41:26.964585 | orchestrator | ok: [testbed-node-3]
2025-05-25 03:41:27.088228 | orchestrator | ok: [testbed-node-4]
2025-05-25 03:41:27.088328 | orchestrator | ok: [testbed-node-5]
2025-05-25 03:41:27.089060 | orchestrator |
2025-05-25 03:41:27.089666 | orchestrator | TASK [osism.services.hddtemp : Include distribution specific install tasks] ****
2025-05-25 03:41:27.092945 | orchestrator | Sunday 25 May 2025 03:41:27 +0000 (0:00:00.676) 0:00:00.939 ************
2025-05-25 03:41:28.275931 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/hddtemp/tasks/install-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-05-25 03:41:28.277440 | orchestrator |
2025-05-25 03:41:28.277476 | orchestrator | TASK [osism.services.hddtemp : Remove hddtemp package] *************************
2025-05-25 03:41:28.278459 | orchestrator | Sunday 25 May 2025 03:41:28 +0000 (0:00:01.177) 0:00:02.117 ************
2025-05-25 03:41:30.204999 | orchestrator | ok: [testbed-manager]
2025-05-25 03:41:30.208251 | orchestrator | ok: [testbed-node-0]
2025-05-25 03:41:30.209160 | orchestrator | ok: [testbed-node-1]
2025-05-25 03:41:30.210014 | orchestrator | ok: [testbed-node-2]
2025-05-25 03:41:30.211064 | orchestrator | ok: [testbed-node-3]
2025-05-25 03:41:30.211784 | orchestrator | ok: [testbed-node-4]
2025-05-25 03:41:30.214198 | orchestrator | ok: [testbed-node-5]
2025-05-25 03:41:30.214222 | orchestrator |
2025-05-25 03:41:30.215426 | orchestrator | TASK [osism.services.hddtemp : Enable Kernel Module drivetemp] *****************
2025-05-25 03:41:30.216180 | orchestrator | Sunday 25 May 2025 03:41:30 +0000 (0:00:01.939) 0:00:04.057 ************
2025-05-25 03:41:30.752326 | orchestrator | changed: [testbed-manager]
2025-05-25 03:41:30.835847 | orchestrator | changed: [testbed-node-0]
2025-05-25 03:41:30.918254 | orchestrator | changed: [testbed-node-1]
2025-05-25 03:41:31.386324 | orchestrator | changed: [testbed-node-2]
2025-05-25 03:41:31.389186 | orchestrator | changed: [testbed-node-3]
2025-05-25 03:41:31.389216 | orchestrator | changed: [testbed-node-4]
2025-05-25 03:41:31.391221 | orchestrator | changed: [testbed-node-5]
2025-05-25 03:41:31.391988 | orchestrator |
2025-05-25 03:41:31.392953 | orchestrator | TASK [osism.services.hddtemp : Check if drivetemp module is available] *********
2025-05-25 03:41:31.393587 | orchestrator | Sunday 25 May 2025 03:41:31 +0000 (0:00:01.177) 0:00:05.234 ************
2025-05-25 03:41:32.489898 | orchestrator | ok: [testbed-node-0]
2025-05-25 03:41:32.490983 | orchestrator | ok: [testbed-node-1]
2025-05-25 03:41:32.492285 | orchestrator | ok: [testbed-node-2]
2025-05-25 03:41:32.493345 | orchestrator | ok: [testbed-node-3]
2025-05-25 03:41:32.494573 | orchestrator | ok: [testbed-node-4]
2025-05-25 03:41:32.495572 | orchestrator | ok: [testbed-manager]
2025-05-25 03:41:32.496854 | orchestrator | ok: [testbed-node-5]
2025-05-25 03:41:32.497652 | orchestrator |
2025-05-25 03:41:32.498529 | orchestrator | TASK [osism.services.hddtemp : Load Kernel Module drivetemp] *******************
2025-05-25 03:41:32.499365 | orchestrator | Sunday 25 May 2025 03:41:32 +0000 (0:00:01.103) 0:00:06.338 ************
2025-05-25 03:41:32.904786 | orchestrator | skipping: [testbed-node-0]
2025-05-25 03:41:32.989157 | orchestrator | skipping: [testbed-node-1]
2025-05-25 03:41:33.072066 | orchestrator | skipping: [testbed-node-2]
2025-05-25 03:41:33.153457 | orchestrator | changed: [testbed-manager]
2025-05-25 03:41:33.278006 | orchestrator | skipping: [testbed-node-3]
2025-05-25 03:41:33.278552 | orchestrator | skipping: [testbed-node-4]
2025-05-25 03:41:33.279950 | orchestrator | skipping: [testbed-node-5]
2025-05-25 03:41:33.280594 | orchestrator |
2025-05-25 03:41:33.281525 | orchestrator | TASK [osism.services.hddtemp : Install lm-sensors] *****************************
2025-05-25 03:41:33.282777 | orchestrator | Sunday 25 May 2025 03:41:33 +0000 (0:00:00.793) 0:00:07.131 ************
2025-05-25 03:41:45.159526 | orchestrator | changed: [testbed-manager]
2025-05-25 03:41:45.159633 | orchestrator | changed: [testbed-node-3]
2025-05-25 03:41:45.161383 | orchestrator | changed: [testbed-node-2]
2025-05-25 03:41:45.162697 | orchestrator | changed: [testbed-node-0]
2025-05-25 03:41:45.164882 | orchestrator | changed: [testbed-node-1]
2025-05-25 03:41:45.165289 | orchestrator | changed: [testbed-node-5]
2025-05-25 03:41:45.166394 | orchestrator | changed: [testbed-node-4]
2025-05-25 03:41:45.167300 | orchestrator |
2025-05-25 03:41:45.168465 | orchestrator | TASK [osism.services.hddtemp : Include distribution specific service tasks] ****
2025-05-25 03:41:45.169080 | orchestrator | Sunday 25 May 2025 03:41:45 +0000 (0:00:11.874) 0:00:19.006 ************
2025-05-25 03:41:46.366251 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/hddtemp/tasks/service-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-05-25 03:41:46.366551 | orchestrator |
2025-05-25 03:41:46.367453 | orchestrator | TASK [osism.services.hddtemp : Manage lm-sensors service] **********************
2025-05-25 03:41:46.368569 | orchestrator | Sunday 25 May 2025 03:41:46 +0000 (0:00:01.210) 0:00:20.216 ************
2025-05-25 03:41:48.194532 | orchestrator | changed: [testbed-manager]
2025-05-25 03:41:48.195071 | orchestrator | changed: [testbed-node-0]
2025-05-25 03:41:48.196314 | orchestrator | changed: [testbed-node-2]
2025-05-25 03:41:48.197291 | orchestrator | changed: [testbed-node-3]
2025-05-25 03:41:48.198254 | orchestrator | changed: [testbed-node-1]
2025-05-25 03:41:48.199368 | orchestrator | changed: [testbed-node-4]
2025-05-25 03:41:48.199913 | orchestrator | changed: [testbed-node-5]
2025-05-25 03:41:48.200834 | orchestrator |
2025-05-25 03:41:48.201580 | orchestrator | PLAY RECAP *********************************************************************
2025-05-25 03:41:48.202251 | orchestrator | 2025-05-25 03:41:48 | INFO  | Play has been completed. There may now be a delay until all logs have been written.
2025-05-25 03:41:48.202391 | orchestrator | 2025-05-25 03:41:48 | INFO  | Please wait and do not abort execution.
2025-05-25 03:41:48.203470 | orchestrator | testbed-manager : ok=9  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-05-25 03:41:48.203809 | orchestrator | testbed-node-0 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2025-05-25 03:41:48.204912 | orchestrator | testbed-node-1 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2025-05-25 03:41:48.205476 | orchestrator | testbed-node-2 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2025-05-25 03:41:48.206122 | orchestrator | testbed-node-3 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2025-05-25 03:41:48.206627 | orchestrator | testbed-node-4 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2025-05-25 03:41:48.207338 | orchestrator | testbed-node-5 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2025-05-25 03:41:48.208445 | orchestrator |
2025-05-25 03:41:48.209864 | orchestrator |
2025-05-25 03:41:48.210445 | orchestrator | TASKS RECAP ********************************************************************
2025-05-25 03:41:48.211438 | orchestrator | Sunday 25 May 2025 03:41:48 +0000 (0:00:01.830) 0:00:22.047 ************
2025-05-25 03:41:48.211868 | orchestrator | ===============================================================================
2025-05-25 03:41:48.212502 | orchestrator | osism.services.hddtemp : Install lm-sensors ---------------------------- 11.87s
2025-05-25 03:41:48.212862 | orchestrator | osism.services.hddtemp : Remove hddtemp package ------------------------- 1.94s
2025-05-25 03:41:48.213341 | orchestrator | osism.services.hddtemp : Manage lm-sensors service ---------------------- 1.83s
2025-05-25 03:41:48.214330 | orchestrator | osism.services.hddtemp : Include distribution specific service tasks ---- 1.21s
2025-05-25 03:41:48.214354 | orchestrator | osism.services.hddtemp : Include distribution specific install tasks ---- 1.18s
2025-05-25 03:41:48.214899 | orchestrator | osism.services.hddtemp : Enable Kernel Module drivetemp ----------------- 1.18s
2025-05-25 03:41:48.215771 | orchestrator | osism.services.hddtemp : Check if drivetemp module is available --------- 1.10s
2025-05-25 03:41:48.216715 | orchestrator | osism.services.hddtemp : Load Kernel Module drivetemp ------------------- 0.79s
2025-05-25 03:41:48.217494 | orchestrator | osism.services.hddtemp : Gather variables for each operating system ----- 0.68s
2025-05-25 03:41:48.819427 | orchestrator | + sudo systemctl restart docker-compose@manager
2025-05-25 03:41:50.348043 | orchestrator | + [[ ceph-ansible == \c\e\p\h\-\a\n\s\i\b\l\e ]]
2025-05-25 03:41:50.348187 | orchestrator | + wait_for_container_healthy 60 ceph-ansible
2025-05-25 03:41:50.348205 | orchestrator | + local max_attempts=60
2025-05-25 03:41:50.348218 | orchestrator | + local name=ceph-ansible
2025-05-25 03:41:50.348229 | orchestrator | + local attempt_num=1
2025-05-25 03:41:50.348240 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible
2025-05-25 03:41:50.389308 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]]
2025-05-25 03:41:50.389428 | orchestrator | + wait_for_container_healthy 60 kolla-ansible
2025-05-25 03:41:50.389441 | orchestrator | + local max_attempts=60
2025-05-25 03:41:50.389449 | orchestrator | + local name=kolla-ansible
2025-05-25 03:41:50.389457 | orchestrator | + local attempt_num=1
2025-05-25 03:41:50.389524 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' kolla-ansible
2025-05-25 03:41:50.421085 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]]
2025-05-25 03:41:50.421210 | orchestrator | + wait_for_container_healthy 60 osism-ansible
2025-05-25 03:41:50.421227 | orchestrator | + local max_attempts=60
2025-05-25 03:41:50.421240 | orchestrator | + local name=osism-ansible
2025-05-25 03:41:50.421933 | orchestrator | + local attempt_num=1
2025-05-25 03:41:50.421959 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' osism-ansible
2025-05-25 03:41:50.450531 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]]
2025-05-25 03:41:50.450564 | orchestrator | + [[ true == \t\r\u\e ]]
2025-05-25 03:41:50.450577 | orchestrator | + sh -c /opt/configuration/scripts/disable-ara.sh
2025-05-25 03:41:50.626962 | orchestrator | ARA in ceph-ansible already disabled.
2025-05-25 03:41:50.781302 | orchestrator | ARA in kolla-ansible already disabled.
2025-05-25 03:41:50.950784 | orchestrator | ARA in osism-ansible already disabled.
2025-05-25 03:41:51.119414 | orchestrator | ARA in osism-kubernetes already disabled.
2025-05-25 03:41:51.119837 | orchestrator | + osism apply gather-facts
2025-05-25 03:41:52.817872 | orchestrator | 2025-05-25 03:41:52 | INFO  | Task f419c508-cb6e-430f-81a2-4775d003acdc (gather-facts) was prepared for execution.
2025-05-25 03:41:52.817971 | orchestrator | 2025-05-25 03:41:52 | INFO  | It takes a moment until task f419c508-cb6e-430f-81a2-4775d003acdc (gather-facts) has been started and output is visible here.
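The `set -x` trace above expands a `wait_for_container_healthy` helper that probes a container's health state via `docker inspect`. A plausible reconstruction, consistent with the traced locals and comparison (the retry delay and give-up behavior are assumptions, since the log only shows the happy path where each container is `healthy` on the first probe; `docker` stands in for the `/usr/bin/docker` seen in the trace):

```shell
# Reconstructed sketch: poll a container until Docker's healthcheck
# reports "healthy", giving up after max_attempts probes.
wait_for_container_healthy() {
    local max_attempts=$1
    local name=$2
    local attempt_num=1
    until [[ "$(docker inspect -f '{{.State.Health.Status}}' "$name")" == healthy ]]; do
        if (( attempt_num >= max_attempts )); then
            echo "container $name did not become healthy" >&2
            return 1
        fi
        (( attempt_num++ ))
        sleep 5   # interval is a guess; only the probe itself appears in the trace
    done
}
```

The `{{.State.Health.Status}}` Go template only resolves for containers that define a healthcheck, which is why the script can compare the output directly against `healthy`.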
2025-05-25 03:41:56.750930 | orchestrator |
2025-05-25 03:41:56.751739 | orchestrator | PLAY [Gather facts for all hosts] **********************************************
2025-05-25 03:41:56.752718 | orchestrator |
2025-05-25 03:41:56.754918 | orchestrator | TASK [Gathers facts about hosts] ***********************************************
2025-05-25 03:41:56.755698 | orchestrator | Sunday 25 May 2025 03:41:56 +0000 (0:00:00.217) 0:00:00.217 ************
2025-05-25 03:42:02.609277 | orchestrator | ok: [testbed-node-2]
2025-05-25 03:42:02.609747 | orchestrator | ok: [testbed-node-0]
2025-05-25 03:42:02.610929 | orchestrator | ok: [testbed-node-1]
2025-05-25 03:42:02.611794 | orchestrator | ok: [testbed-manager]
2025-05-25 03:42:02.612372 | orchestrator | ok: [testbed-node-3]
2025-05-25 03:42:02.613883 | orchestrator | ok: [testbed-node-5]
2025-05-25 03:42:02.614778 | orchestrator | ok: [testbed-node-4]
2025-05-25 03:42:02.615709 | orchestrator |
2025-05-25 03:42:02.616841 | orchestrator | PLAY [Gather facts for all hosts if using --limit] *****************************
2025-05-25 03:42:02.617806 | orchestrator |
2025-05-25 03:42:02.618820 | orchestrator | TASK [Gather facts for all hosts] **********************************************
2025-05-25 03:42:02.619337 | orchestrator | Sunday 25 May 2025 03:42:02 +0000 (0:00:05.860) 0:00:06.078 ************
2025-05-25 03:42:02.763529 | orchestrator | skipping: [testbed-manager]
2025-05-25 03:42:02.842908 | orchestrator | skipping: [testbed-node-0]
2025-05-25 03:42:02.931035 | orchestrator | skipping: [testbed-node-1]
2025-05-25 03:42:03.010953 | orchestrator | skipping: [testbed-node-2]
2025-05-25 03:42:03.090840 | orchestrator | skipping: [testbed-node-3]
2025-05-25 03:42:03.126822 | orchestrator | skipping: [testbed-node-4]
2025-05-25 03:42:03.127000 | orchestrator | skipping: [testbed-node-5]
2025-05-25 03:42:03.127946 | orchestrator |
2025-05-25 03:42:03.128230 | orchestrator | PLAY RECAP *********************************************************************
2025-05-25 03:42:03.129084 | orchestrator | 2025-05-25 03:42:03 | INFO  | Play has been completed. There may now be a delay until all logs have been written.
2025-05-25 03:42:03.129199 | orchestrator | 2025-05-25 03:42:03 | INFO  | Please wait and do not abort execution.
2025-05-25 03:42:03.129303 | orchestrator | testbed-manager : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2025-05-25 03:42:03.129574 | orchestrator | testbed-node-0 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2025-05-25 03:42:03.130088 | orchestrator | testbed-node-1 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2025-05-25 03:42:03.130587 | orchestrator | testbed-node-2 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2025-05-25 03:42:03.130991 | orchestrator | testbed-node-3 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2025-05-25 03:42:03.132315 | orchestrator | testbed-node-4 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2025-05-25 03:42:03.132396 | orchestrator | testbed-node-5 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2025-05-25 03:42:03.134000 | orchestrator |
2025-05-25 03:42:03.134336 | orchestrator |
2025-05-25 03:42:03.134720 | orchestrator | TASKS RECAP ********************************************************************
2025-05-25 03:42:03.135291 | orchestrator | Sunday 25 May 2025 03:42:03 +0000 (0:00:00.518) 0:00:06.596 ************
2025-05-25 03:42:03.135514 | orchestrator | ===============================================================================
2025-05-25 03:42:03.135884 | orchestrator | Gathers facts about hosts ----------------------------------------------- 5.86s
2025-05-25 03:42:03.136264 | orchestrator | Gather facts for all hosts ---------------------------------------------- 0.52s
2025-05-25 03:42:03.696252 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/001-helpers.sh /usr/local/bin/deploy-helper
2025-05-25 03:42:03.714486 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/500-kubernetes.sh /usr/local/bin/deploy-kubernetes
2025-05-25 03:42:03.736805 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/510-clusterapi.sh /usr/local/bin/deploy-kubernetes-clusterapi
2025-05-25 03:42:03.758243 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/100-ceph-with-ansible.sh /usr/local/bin/deploy-ceph-with-ansible
2025-05-25 03:42:03.776906 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/100-ceph-with-rook.sh /usr/local/bin/deploy-ceph-with-rook
2025-05-25 03:42:03.794167 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/200-infrastructure.sh /usr/local/bin/deploy-infrastructure
2025-05-25 03:42:03.808026 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/300-openstack.sh /usr/local/bin/deploy-openstack
2025-05-25 03:42:03.824733 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/400-monitoring.sh /usr/local/bin/deploy-monitoring
2025-05-25 03:42:03.837539 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/500-kubernetes.sh /usr/local/bin/upgrade-kubernetes
2025-05-25 03:42:03.850989 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/510-clusterapi.sh /usr/local/bin/upgrade-kubernetes-clusterapi
2025-05-25 03:42:03.869048 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/100-ceph-with-ansible.sh /usr/local/bin/upgrade-ceph-with-ansible
2025-05-25 03:42:03.880319 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/100-ceph-with-rook.sh /usr/local/bin/upgrade-ceph-with-rook
2025-05-25 03:42:03.891939 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/200-infrastructure.sh /usr/local/bin/upgrade-infrastructure
2025-05-25 03:42:03.902554 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/300-openstack.sh /usr/local/bin/upgrade-openstack
2025-05-25 03:42:03.914297 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/400-monitoring.sh /usr/local/bin/upgrade-monitoring
2025-05-25 03:42:03.923923 | orchestrator | + sudo ln -sf /opt/configuration/scripts/bootstrap/300-openstack.sh /usr/local/bin/bootstrap-openstack
2025-05-25 03:42:03.939712 | orchestrator | + sudo ln -sf /opt/configuration/scripts/bootstrap/301-openstack-octavia-amhpora-image.sh /usr/local/bin/bootstrap-octavia
2025-05-25 03:42:03.959648 | orchestrator | + sudo ln -sf /opt/configuration/scripts/bootstrap/302-openstack-k8s-clusterapi-images.sh /usr/local/bin/bootstrap-clusterapi
2025-05-25 03:42:03.980652 | orchestrator | + sudo ln -sf /opt/configuration/scripts/disable-local-registry.sh /usr/local/bin/disable-local-registry
2025-05-25 03:42:04.000517 | orchestrator | + sudo ln -sf /opt/configuration/scripts/pull-images.sh /usr/local/bin/pull-images
2025-05-25 03:42:04.021236 | orchestrator | + [[ false == \t\r\u\e ]]
2025-05-25 03:42:04.429752 | orchestrator | ok: Runtime: 0:24:47.015044
2025-05-25 03:42:04.529489 |
2025-05-25 03:42:04.529621 | TASK [Deploy services]
2025-05-25 03:42:05.061922 | orchestrator | skipping: Conditional result was False
2025-05-25 03:42:05.082965 |
2025-05-25 03:42:05.083160 | TASK [Deploy in a nutshell]
2025-05-25 03:42:05.850535 | orchestrator | + set -e
2025-05-25 03:42:05.850779 | orchestrator | + source /opt/configuration/scripts/include.sh
2025-05-25 03:42:05.850838 | orchestrator | ++ export INTERACTIVE=false
2025-05-25 03:42:05.850871 | orchestrator | ++ INTERACTIVE=false
2025-05-25 03:42:05.850899 | orchestrator | ++ export OSISM_APPLY_RETRY=1
2025-05-25 03:42:05.850912 | orchestrator | ++ OSISM_APPLY_RETRY=1
2025-05-25 03:42:05.850941 | orchestrator | + source /opt/manager-vars.sh
2025-05-25 03:42:05.851008 | orchestrator | ++ export NUMBER_OF_NODES=6
2025-05-25 03:42:05.851039 | orchestrator | ++ NUMBER_OF_NODES=6
2025-05-25 03:42:05.851054 | orchestrator | ++ export CEPH_VERSION=reef
2025-05-25 03:42:05.851070 | orchestrator | ++ CEPH_VERSION=reef
2025-05-25 03:42:05.851082 | orchestrator | ++ export CONFIGURATION_VERSION=main
2025-05-25 03:42:05.851128 | orchestrator | ++ CONFIGURATION_VERSION=main
2025-05-25 03:42:05.851141 | orchestrator | ++ export MANAGER_VERSION=latest
2025-05-25 03:42:05.851161 | orchestrator | ++ MANAGER_VERSION=latest
2025-05-25 03:42:05.851173 | orchestrator | ++ export OPENSTACK_VERSION=2024.2
2025-05-25 03:42:05.851213 | orchestrator | ++ OPENSTACK_VERSION=2024.2
2025-05-25 03:42:05.851234 | orchestrator | ++ export ARA=false
2025-05-25 03:42:05.851252 | orchestrator | ++ ARA=false
2025-05-25 03:42:05.851271 | orchestrator | ++ export TEMPEST=true
2025-05-25 03:42:05.851316 | orchestrator | ++ TEMPEST=true
2025-05-25 03:42:05.851327 | orchestrator | ++ export IS_ZUUL=true
2025-05-25 03:42:05.851347 | orchestrator | ++ IS_ZUUL=true
2025-05-25 03:42:05.851360 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.192.153
2025-05-25 03:42:05.851379 | orchestrator | ++ MANAGER_PUBLIC_IP_ADDRESS=81.163.192.153
2025-05-25 03:42:05.851390 | orchestrator | ++ export EXTERNAL_API=false
2025-05-25 03:42:05.851401 | orchestrator | ++ EXTERNAL_API=false
2025-05-25 03:42:05.851412 | orchestrator | ++ export IMAGE_USER=ubuntu
2025-05-25 03:42:05.851423 | orchestrator | ++ IMAGE_USER=ubuntu
2025-05-25 03:42:05.851434 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu
2025-05-25 03:42:05.851445 | orchestrator | ++ IMAGE_NODE_USER=ubuntu
2025-05-25 03:42:05.851456 | orchestrator | ++ export CEPH_STACK=ceph-ansible
2025-05-25 03:42:05.851467 | orchestrator | ++ CEPH_STACK=ceph-ansible
2025-05-25 03:42:05.851482 | orchestrator |
2025-05-25 03:42:05.851494 | orchestrator | # PULL IMAGES
2025-05-25 03:42:05.851505 | orchestrator |
2025-05-25 03:42:05.851516 | orchestrator | + echo
2025-05-25 03:42:05.851527 | orchestrator | + echo '# PULL IMAGES'
2025-05-25 03:42:05.851538 | orchestrator | + echo
2025-05-25 03:42:05.853194 | orchestrator | ++ semver latest 7.0.0
2025-05-25 03:42:05.916741 | orchestrator | + [[ -1 -ge 0 ]]
2025-05-25 03:42:05.916822 | orchestrator | + [[ latest == \l\a\t\e\s\t ]]
2025-05-25 03:42:05.916836 | orchestrator | + osism apply -r 2 -e custom pull-images
2025-05-25 03:42:07.572642 | orchestrator | 2025-05-25 03:42:07 | INFO  | Trying to run play pull-images in environment custom
2025-05-25 03:42:07.632965 | orchestrator | 2025-05-25 03:42:07 | INFO  | Task beb0d436-04fb-4c3d-897b-e54cf851b148 (pull-images) was prepared for execution.
2025-05-25 03:42:07.633057 | orchestrator | 2025-05-25 03:42:07 | INFO  | It takes a moment until task beb0d436-04fb-4c3d-897b-e54cf851b148 (pull-images) has been started and output is visible here.
2025-05-25 03:42:11.447577 | orchestrator |
2025-05-25 03:42:11.448330 | orchestrator | PLAY [Pull images] *************************************************************
2025-05-25 03:42:11.448956 | orchestrator |
2025-05-25 03:42:11.449760 | orchestrator | TASK [Pull keystone image] *****************************************************
2025-05-25 03:42:11.451269 | orchestrator | Sunday 25 May 2025 03:42:11 +0000 (0:00:00.122) 0:00:00.122 ************
2025-05-25 03:43:13.684764 | orchestrator | changed: [testbed-manager]
2025-05-25 03:43:13.684890 | orchestrator |
2025-05-25 03:43:13.684911 | orchestrator | TASK [Pull other images] *******************************************************
2025-05-25 03:43:13.684925 | orchestrator | Sunday 25 May 2025 03:43:13 +0000 (0:01:02.235) 0:01:02.357 ************
2025-05-25 03:44:06.410907 | orchestrator | changed: [testbed-manager] => (item=aodh)
2025-05-25 03:44:06.411067 | orchestrator | changed: [testbed-manager] => (item=barbican)
2025-05-25 03:44:06.411169 | orchestrator | changed: [testbed-manager] => (item=ceilometer)
2025-05-25 03:44:06.411184 | orchestrator | changed: [testbed-manager] => (item=cinder)
2025-05-25 03:44:06.412593 | orchestrator | changed: [testbed-manager] => (item=common)
2025-05-25 03:44:06.414366 | orchestrator | changed: [testbed-manager] => (item=designate)
2025-05-25 03:44:06.415695 | orchestrator | changed: [testbed-manager] => (item=glance)
2025-05-25 03:44:06.416567 | orchestrator | changed: [testbed-manager] => (item=grafana)
2025-05-25 03:44:06.417545 | orchestrator | changed: [testbed-manager] => (item=horizon)
2025-05-25 03:44:06.417983 | orchestrator | changed: [testbed-manager] => (item=ironic)
2025-05-25 03:44:06.418302 | orchestrator | changed: [testbed-manager] => (item=loadbalancer)
2025-05-25 03:44:06.418972 | orchestrator | changed: [testbed-manager] => (item=magnum)
2025-05-25 03:44:06.419815 | orchestrator | changed: [testbed-manager] => (item=mariadb)
2025-05-25 03:44:06.420138 | orchestrator | changed: [testbed-manager] => (item=memcached)
2025-05-25 03:44:06.420922 | orchestrator | changed: [testbed-manager] => (item=neutron)
2025-05-25 03:44:06.421297 | orchestrator | changed: [testbed-manager] => (item=nova)
2025-05-25 03:44:06.421938 | orchestrator | changed: [testbed-manager] => (item=octavia)
2025-05-25 03:44:06.422325 | orchestrator | changed: [testbed-manager] => (item=opensearch)
2025-05-25 03:44:06.422980 | orchestrator | changed: [testbed-manager] => (item=openvswitch)
2025-05-25 03:44:06.423437 | orchestrator | changed: [testbed-manager] => (item=ovn)
2025-05-25 03:44:06.423919 | orchestrator | changed: [testbed-manager] => (item=placement)
2025-05-25 03:44:06.424414 | orchestrator | changed: [testbed-manager] => (item=rabbitmq)
2025-05-25 03:44:06.424784 | orchestrator | changed: [testbed-manager] => (item=redis)
2025-05-25 03:44:06.425197 | orchestrator | changed: [testbed-manager] => (item=skyline)
2025-05-25 03:44:06.425917 | orchestrator |
2025-05-25 03:44:06.426225 | orchestrator | PLAY RECAP *********************************************************************
2025-05-25 03:44:06.426569 | orchestrator | 2025-05-25 03:44:06 | INFO  | Play has been completed. There may now be a delay until all logs have been written.
2025-05-25 03:44:06.426709 | orchestrator | 2025-05-25 03:44:06 | INFO  | Please wait and do not abort execution.
2025-05-25 03:44:06.427739 | orchestrator | testbed-manager : ok=2  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-05-25 03:44:06.428603 | orchestrator |
2025-05-25 03:44:06.429919 | orchestrator |
2025-05-25 03:44:06.429966 | orchestrator | TASKS RECAP ********************************************************************
2025-05-25 03:44:06.430721 | orchestrator | Sunday 25 May 2025 03:44:06 +0000 (0:00:52.726) 0:01:55.084 ************
2025-05-25 03:44:06.431579 | orchestrator | ===============================================================================
2025-05-25 03:44:06.432482 | orchestrator | Pull keystone image ---------------------------------------------------- 62.24s
2025-05-25 03:44:06.432979 | orchestrator | Pull other images ------------------------------------------------------ 52.73s
2025-05-25 03:44:08.705316 | orchestrator | 2025-05-25 03:44:08 | INFO  | Trying to run play wipe-partitions in environment custom
2025-05-25 03:44:08.776431 | orchestrator | 2025-05-25 03:44:08 | INFO  | Task 52ce8f94-3f1c-4f5c-9981-a17f6c495ca1 (wipe-partitions) was prepared for execution.
2025-05-25 03:44:08.776547 | orchestrator | 2025-05-25 03:44:08 | INFO  | It takes a moment until task 52ce8f94-3f1c-4f5c-9981-a17f6c495ca1 (wipe-partitions) has been started and output is visible here.
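[editor's note] The trace above shows the nutshell script driving each stage through `osism apply` (retry count from `OSISM_APPLY_RETRY`, environment `custom`). A minimal sketch of that driver loop, assuming the play names seen in this log; the flags are copied from the logged `pull-images` invocation and may differ per play, and the `echo` only prints the commands rather than running them:

```shell
#!/usr/bin/env bash
# Illustrative only: replay the sequence of plays observed in this job.
# Each stage in the log corresponds to one "osism apply" invocation.
set -e
OSISM_APPLY_RETRY=1   # value sourced from include.sh in the trace above
for play in pull-images wipe-partitions facts ceph-configure-lvm-volumes; do
  cmd="osism apply -r $OSISM_APPLY_RETRY -e custom $play"
  echo "$cmd"   # a real driver would execute this instead of printing it
done
```

Printing instead of executing keeps the sketch safe to run outside a testbed manager node.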
2025-05-25 03:44:12.757941 | orchestrator |
2025-05-25 03:44:12.759160 | orchestrator | PLAY [Wipe partitions] *********************************************************
2025-05-25 03:44:12.759193 | orchestrator |
2025-05-25 03:44:12.759469 | orchestrator | TASK [Find all logical devices owned by UID 167] *******************************
2025-05-25 03:44:12.760236 | orchestrator | Sunday 25 May 2025 03:44:12 +0000 (0:00:00.129) 0:00:00.129 ************
2025-05-25 03:44:13.348317 | orchestrator | changed: [testbed-node-3]
2025-05-25 03:44:13.348988 | orchestrator | changed: [testbed-node-4]
2025-05-25 03:44:13.350567 | orchestrator | changed: [testbed-node-5]
2025-05-25 03:44:13.350682 | orchestrator |
2025-05-25 03:44:13.350806 | orchestrator | TASK [Remove all rook related logical devices] *********************************
2025-05-25 03:44:13.351139 | orchestrator | Sunday 25 May 2025 03:44:13 +0000 (0:00:00.593) 0:00:00.723 ************
2025-05-25 03:44:13.503022 | orchestrator | skipping: [testbed-node-3]
2025-05-25 03:44:13.600525 | orchestrator | skipping: [testbed-node-4]
2025-05-25 03:44:13.602219 | orchestrator | skipping: [testbed-node-5]
2025-05-25 03:44:13.607192 | orchestrator |
2025-05-25 03:44:13.607219 | orchestrator | TASK [Find all logical devices with prefix ceph] *******************************
2025-05-25 03:44:13.608189 | orchestrator | Sunday 25 May 2025 03:44:13 +0000 (0:00:00.251) 0:00:00.974 ************
2025-05-25 03:44:14.335008 | orchestrator | ok: [testbed-node-3]
2025-05-25 03:44:14.338456 | orchestrator | ok: [testbed-node-5]
2025-05-25 03:44:14.339397 | orchestrator | ok: [testbed-node-4]
2025-05-25 03:44:14.339786 | orchestrator |
2025-05-25 03:44:14.340415 | orchestrator | TASK [Remove all ceph related logical devices] *********************************
2025-05-25 03:44:14.340954 | orchestrator | Sunday 25 May 2025 03:44:14 +0000 (0:00:00.732) 0:00:01.707 ************
2025-05-25 03:44:14.500577 | orchestrator | skipping: [testbed-node-3]
2025-05-25 03:44:14.596829 | orchestrator | skipping: [testbed-node-4]
2025-05-25 03:44:14.596981 | orchestrator | skipping: [testbed-node-5]
2025-05-25 03:44:14.596995 | orchestrator |
2025-05-25 03:44:14.597806 | orchestrator | TASK [Check device availability] ***********************************************
2025-05-25 03:44:14.598122 | orchestrator | Sunday 25 May 2025 03:44:14 +0000 (0:00:00.264) 0:00:01.972 ************
2025-05-25 03:44:15.726642 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdb)
2025-05-25 03:44:15.726725 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdb)
2025-05-25 03:44:15.729852 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdb)
2025-05-25 03:44:15.735352 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdc)
2025-05-25 03:44:15.735406 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdc)
2025-05-25 03:44:15.735414 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdc)
2025-05-25 03:44:15.735420 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdd)
2025-05-25 03:44:15.735426 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdd)
2025-05-25 03:44:15.735431 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdd)
2025-05-25 03:44:15.735436 | orchestrator |
2025-05-25 03:44:15.735443 | orchestrator | TASK [Wipe partitions with wipefs] *********************************************
2025-05-25 03:44:15.735449 | orchestrator | Sunday 25 May 2025 03:44:15 +0000 (0:00:01.129) 0:00:03.101 ************
2025-05-25 03:44:16.968862 | orchestrator | ok: [testbed-node-3] => (item=/dev/sdb)
2025-05-25 03:44:16.969141 | orchestrator | ok: [testbed-node-4] => (item=/dev/sdb)
2025-05-25 03:44:16.969694 | orchestrator | ok: [testbed-node-5] => (item=/dev/sdb)
2025-05-25 03:44:16.971447 | orchestrator | ok: [testbed-node-3] => (item=/dev/sdc)
2025-05-25 03:44:16.971479 | orchestrator | ok: [testbed-node-4] => (item=/dev/sdc)
2025-05-25 03:44:16.971861 | orchestrator | ok: [testbed-node-5] => (item=/dev/sdc)
2025-05-25 03:44:16.972426 | orchestrator | ok: [testbed-node-3] => (item=/dev/sdd)
2025-05-25 03:44:16.975112 | orchestrator | ok: [testbed-node-4] => (item=/dev/sdd)
2025-05-25 03:44:16.975505 | orchestrator | ok: [testbed-node-5] => (item=/dev/sdd)
2025-05-25 03:44:16.975534 | orchestrator |
2025-05-25 03:44:16.975721 | orchestrator | TASK [Overwrite first 32M with zeros] ******************************************
2025-05-25 03:44:16.975992 | orchestrator | Sunday 25 May 2025 03:44:16 +0000 (0:00:01.239) 0:00:04.340 ************
2025-05-25 03:44:19.018380 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdb)
2025-05-25 03:44:19.018476 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdb)
2025-05-25 03:44:19.018948 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdb)
2025-05-25 03:44:19.019435 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdc)
2025-05-25 03:44:19.019902 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdc)
2025-05-25 03:44:19.020328 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdc)
2025-05-25 03:44:19.020861 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdd)
2025-05-25 03:44:19.021305 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdd)
2025-05-25 03:44:19.021694 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdd)
2025-05-25 03:44:19.022237 | orchestrator |
2025-05-25 03:44:19.024701 | orchestrator | TASK [Reload udev rules] *******************************************************
2025-05-25 03:44:19.024710 | orchestrator | Sunday 25 May 2025 03:44:19 +0000 (0:00:02.052) 0:00:06.393 ************
2025-05-25 03:44:19.582401 | orchestrator | changed: [testbed-node-3]
2025-05-25 03:44:19.582587 | orchestrator | changed: [testbed-node-4]
2025-05-25 03:44:19.583144 | orchestrator | changed: [testbed-node-5]
2025-05-25 03:44:19.583461 | orchestrator |
2025-05-25 03:44:19.583969 | orchestrator | TASK [Request device events from the kernel] ***********************************
2025-05-25 03:44:19.584407 | orchestrator | Sunday 25 May 2025 03:44:19 +0000 (0:00:00.563) 0:00:06.957 ************
2025-05-25 03:44:20.113228 | orchestrator | changed: [testbed-node-3]
2025-05-25 03:44:20.113769 | orchestrator | changed: [testbed-node-4]
2025-05-25 03:44:20.116953 | orchestrator | changed: [testbed-node-5]
2025-05-25 03:44:20.117310 | orchestrator |
2025-05-25 03:44:20.117651 | orchestrator | 2025-05-25 03:44:20 | INFO  | Play has been completed. There may now be a delay until all logs have been written.
2025-05-25 03:44:20.117675 | orchestrator | 2025-05-25 03:44:20 | INFO  | Please wait and do not abort execution.
2025-05-25 03:44:20.117754 | orchestrator | PLAY RECAP *********************************************************************
2025-05-25 03:44:20.117969 | orchestrator | testbed-node-3 : ok=7  changed=5  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-05-25 03:44:20.118197 | orchestrator | testbed-node-4 : ok=7  changed=5  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-05-25 03:44:20.118488 | orchestrator | testbed-node-5 : ok=7  changed=5  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-05-25 03:44:20.118798 | orchestrator |
2025-05-25 03:44:20.119045 | orchestrator |
2025-05-25 03:44:20.119405 | orchestrator | TASKS RECAP ********************************************************************
2025-05-25 03:44:20.119866 | orchestrator | Sunday 25 May 2025 03:44:20 +0000 (0:00:00.531) 0:00:07.488 ************
2025-05-25 03:44:20.120179 | orchestrator | ===============================================================================
2025-05-25 03:44:20.120766 | orchestrator | Overwrite first 32M with zeros ------------------------------------------ 2.05s
2025-05-25 03:44:20.121661 | orchestrator | Wipe partitions with wipefs --------------------------------------------- 1.24s
2025-05-25 03:44:20.121939 | orchestrator | Check device availability ----------------------------------------------- 1.13s
2025-05-25 03:44:20.122558 | orchestrator | Find all logical devices with prefix ceph ------------------------------- 0.73s
2025-05-25 03:44:20.123186 | orchestrator | Find all logical devices owned by UID 167 ------------------------------- 0.59s
2025-05-25 03:44:20.123729 | orchestrator | Reload udev rules ------------------------------------------------------- 0.56s
2025-05-25 03:44:20.126929 | orchestrator | Request device events from the kernel ----------------------------------- 0.53s
2025-05-25 03:44:20.127119 | orchestrator | Remove all ceph related logical devices --------------------------------- 0.26s
2025-05-25 03:44:20.127902 | orchestrator | Remove all rook related logical devices --------------------------------- 0.25s
2025-05-25 03:44:21.880916 | orchestrator | 2025-05-25 03:44:21 | INFO  | Task 3fb0f42f-f79e-4005-ae99-7a9f72760a1d (facts) was prepared for execution.
2025-05-25 03:44:21.881014 | orchestrator | 2025-05-25 03:44:21 | INFO  | It takes a moment until task 3fb0f42f-f79e-4005-ae99-7a9f72760a1d (facts) has been started and output is visible here.
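[editor's note] The wipe-partitions play above boils down to three steps per OSD disk: `wipefs` to clear filesystem/RAID signatures, zeroing the first 32M, and re-triggering udev. A stand-alone sketch of those steps against a scratch file (device names like `/dev/sdb` are placeholders; the `wipefs`/`udevadm` calls are shown only as comments because they need a real block device and root):

```shell
#!/usr/bin/env bash
# Sketch of the wipe steps from the play above, using a scratch file
# as a stand-in for a disk such as /dev/sdb.
set -e
disk=$(mktemp)
dd if=/dev/urandom of="$disk" bs=1M count=40 status=none   # fake "dirty" disk
# 1. On real hardware the play first erases signatures:
#      wipefs -a "$disk"
# 2. Then it overwrites the first 32M with zeros (conv=notrunc keeps the size):
dd if=/dev/zero of="$disk" bs=1M count=32 conv=notrunc status=none
# 3. Finally udev rules are reloaded and device events re-requested:
#      udevadm control --reload-rules && udevadm trigger
# Verify the first 32M now compare equal to /dev/zero:
result=$(cmp -s -n $((32*1024*1024)) "$disk" /dev/zero && echo "first 32M zeroed")
echo "$result"
rm -f "$disk"
```

Zeroing the leading region is what removes stale partition tables and LVM/Ceph headers so a fresh OSD deployment does not trip over leftovers.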
2025-05-25 03:44:25.560287 | orchestrator |
2025-05-25 03:44:25.562402 | orchestrator | PLAY [Apply role facts] ********************************************************
2025-05-25 03:44:25.564330 | orchestrator |
2025-05-25 03:44:25.571892 | orchestrator | TASK [osism.commons.facts : Create custom facts directory] *********************
2025-05-25 03:44:25.573400 | orchestrator | Sunday 25 May 2025 03:44:25 +0000 (0:00:00.198) 0:00:00.198 ************
2025-05-25 03:44:26.461518 | orchestrator | ok: [testbed-manager]
2025-05-25 03:44:26.467063 | orchestrator | ok: [testbed-node-1]
2025-05-25 03:44:26.470495 | orchestrator | ok: [testbed-node-0]
2025-05-25 03:44:26.471934 | orchestrator | ok: [testbed-node-2]
2025-05-25 03:44:26.473282 | orchestrator | ok: [testbed-node-3]
2025-05-25 03:44:26.474510 | orchestrator | ok: [testbed-node-4]
2025-05-25 03:44:26.476393 | orchestrator | ok: [testbed-node-5]
2025-05-25 03:44:26.477261 | orchestrator |
2025-05-25 03:44:26.477767 | orchestrator | TASK [osism.commons.facts : Copy fact files] ***********************************
2025-05-25 03:44:26.478731 | orchestrator | Sunday 25 May 2025 03:44:26 +0000 (0:00:00.901) 0:00:01.099 ************
2025-05-25 03:44:26.645140 | orchestrator | skipping: [testbed-manager]
2025-05-25 03:44:26.713749 | orchestrator | skipping: [testbed-node-0]
2025-05-25 03:44:26.785995 | orchestrator | skipping: [testbed-node-1]
2025-05-25 03:44:26.850238 | orchestrator | skipping: [testbed-node-2]
2025-05-25 03:44:26.919316 | orchestrator | skipping: [testbed-node-3]
2025-05-25 03:44:27.544899 | orchestrator | skipping: [testbed-node-4]
2025-05-25 03:44:27.546220 | orchestrator | skipping: [testbed-node-5]
2025-05-25 03:44:27.547302 | orchestrator |
2025-05-25 03:44:27.549346 | orchestrator | PLAY [Gather facts for all hosts] **********************************************
2025-05-25 03:44:27.550640 | orchestrator |
2025-05-25 03:44:27.552204 | orchestrator | TASK [Gathers facts about hosts] ***********************************************
2025-05-25 03:44:27.552956 | orchestrator | Sunday 25 May 2025 03:44:27 +0000 (0:00:01.082) 0:00:02.182 ************
2025-05-25 03:44:32.233828 | orchestrator | ok: [testbed-node-2]
2025-05-25 03:44:32.233933 | orchestrator | ok: [testbed-node-1]
2025-05-25 03:44:32.240107 | orchestrator | ok: [testbed-manager]
2025-05-25 03:44:32.240629 | orchestrator | ok: [testbed-node-0]
2025-05-25 03:44:32.241240 | orchestrator | ok: [testbed-node-3]
2025-05-25 03:44:32.242257 | orchestrator | ok: [testbed-node-4]
2025-05-25 03:44:32.243176 | orchestrator | ok: [testbed-node-5]
2025-05-25 03:44:32.244315 | orchestrator |
2025-05-25 03:44:32.244950 | orchestrator | PLAY [Gather facts for all hosts if using --limit] *****************************
2025-05-25 03:44:32.246188 | orchestrator |
2025-05-25 03:44:32.248183 | orchestrator | TASK [Gather facts for all hosts] **********************************************
2025-05-25 03:44:32.249045 | orchestrator | Sunday 25 May 2025 03:44:32 +0000 (0:00:04.690) 0:00:06.873 ************
2025-05-25 03:44:32.379941 | orchestrator | skipping: [testbed-manager]
2025-05-25 03:44:32.455865 | orchestrator | skipping: [testbed-node-0]
2025-05-25 03:44:32.529763 | orchestrator | skipping: [testbed-node-1]
2025-05-25 03:44:32.606388 | orchestrator | skipping: [testbed-node-2]
2025-05-25 03:44:32.680119 | orchestrator | skipping: [testbed-node-3]
2025-05-25 03:44:32.727161 | orchestrator | skipping: [testbed-node-4]
2025-05-25 03:44:32.727399 | orchestrator | skipping: [testbed-node-5]
2025-05-25 03:44:32.728298 | orchestrator |
2025-05-25 03:44:32.728787 | orchestrator | PLAY RECAP *********************************************************************
2025-05-25 03:44:32.729358 | orchestrator | 2025-05-25 03:44:32 | INFO  | Play has been completed. There may now be a delay until all logs have been written.
2025-05-25 03:44:32.729857 | orchestrator | 2025-05-25 03:44:32 | INFO  | Please wait and do not abort execution.
2025-05-25 03:44:32.730601 | orchestrator | testbed-manager : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-05-25 03:44:32.731065 | orchestrator | testbed-node-0 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-05-25 03:44:32.731741 | orchestrator | testbed-node-1 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-05-25 03:44:32.732297 | orchestrator | testbed-node-2 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-05-25 03:44:32.732798 | orchestrator | testbed-node-3 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-05-25 03:44:32.733314 | orchestrator | testbed-node-4 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-05-25 03:44:32.733927 | orchestrator | testbed-node-5 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-05-25 03:44:32.734432 | orchestrator |
2025-05-25 03:44:32.734893 | orchestrator |
2025-05-25 03:44:32.735284 | orchestrator | TASKS RECAP ********************************************************************
2025-05-25 03:44:32.735856 | orchestrator | Sunday 25 May 2025 03:44:32 +0000 (0:00:00.495) 0:00:07.369 ************
2025-05-25 03:44:32.736409 | orchestrator | ===============================================================================
2025-05-25 03:44:32.736877 | orchestrator | Gathers facts about hosts ----------------------------------------------- 4.69s
2025-05-25 03:44:32.737464 | orchestrator | osism.commons.facts : Copy fact files ----------------------------------- 1.08s
2025-05-25 03:44:32.738176 | orchestrator | osism.commons.facts : Create custom facts directory --------------------- 0.90s
2025-05-25 03:44:32.738442 | orchestrator | Gather facts for all hosts ---------------------------------------------- 0.50s
2025-05-25 03:44:35.146699 | orchestrator | 2025-05-25 03:44:35 | INFO  | Task 57137bc9-4656-4835-91d7-271c04385616 (ceph-configure-lvm-volumes) was prepared for execution.
2025-05-25 03:44:35.146800 | orchestrator | 2025-05-25 03:44:35 | INFO  | It takes a moment until task 57137bc9-4656-4835-91d7-271c04385616 (ceph-configure-lvm-volumes) has been started and output is visible here.
2025-05-25 03:44:39.566166 | orchestrator |
2025-05-25 03:44:39.566548 | orchestrator | PLAY [Ceph configure LVM] ******************************************************
2025-05-25 03:44:39.566583 | orchestrator |
2025-05-25 03:44:39.568945 | orchestrator | TASK [Get extra vars for Ceph configuration] ***********************************
2025-05-25 03:44:39.569238 | orchestrator | Sunday 25 May 2025 03:44:39 +0000 (0:00:00.341) 0:00:00.341 ************
2025-05-25 03:44:39.831192 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)]
2025-05-25 03:44:39.831302 | orchestrator |
2025-05-25 03:44:39.831319 | orchestrator | TASK [Get initial list of available block devices] *****************************
2025-05-25 03:44:39.831332 | orchestrator | Sunday 25 May 2025 03:44:39 +0000 (0:00:00.266) 0:00:00.607 ************
2025-05-25 03:44:40.056021 | orchestrator | ok: [testbed-node-3]
2025-05-25 03:44:40.056156 | orchestrator |
2025-05-25 03:44:40.057497 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-05-25 03:44:40.058471 | orchestrator | Sunday 25 May 2025 03:44:40 +0000 (0:00:00.223) 0:00:00.831 ************
2025-05-25 03:44:40.526993 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop0)
2025-05-25 03:44:40.528768 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop1)
2025-05-25 03:44:40.530337 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop2)
2025-05-25 03:44:40.531886 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop3)
2025-05-25 03:44:40.531978 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop4)
2025-05-25 03:44:40.533912 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop5)
2025-05-25 03:44:40.534713 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop6)
2025-05-25 03:44:40.535906 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop7)
2025-05-25 03:44:40.536746 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sda)
2025-05-25 03:44:40.537702 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdb)
2025-05-25 03:44:40.538632 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdc)
2025-05-25 03:44:40.540061 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdd)
2025-05-25 03:44:40.540495 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sr0)
2025-05-25 03:44:40.541604 | orchestrator |
2025-05-25 03:44:40.542334 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-05-25 03:44:40.543300 | orchestrator | Sunday 25 May 2025 03:44:40 +0000 (0:00:00.471) 0:00:01.302 ************
2025-05-25 03:44:41.026478 | orchestrator | skipping: [testbed-node-3]
2025-05-25 03:44:41.026579 | orchestrator |
2025-05-25 03:44:41.028382 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-05-25 03:44:41.029777 | orchestrator | Sunday 25 May 2025 03:44:41 +0000 (0:00:00.493) 0:00:01.796 ************
2025-05-25 03:44:41.219288 | orchestrator | skipping: [testbed-node-3]
2025-05-25 03:44:41.219981 | orchestrator |
2025-05-25 03:44:41.220513 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-05-25 03:44:41.221860 | orchestrator | Sunday 25 May 2025 03:44:41 +0000 (0:00:00.201) 0:00:01.997 ************
2025-05-25 03:44:41.412672 | orchestrator | skipping: [testbed-node-3]
2025-05-25 03:44:41.414165 | orchestrator |
2025-05-25 03:44:41.414451 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-05-25 03:44:41.415068 | orchestrator | Sunday 25 May 2025 03:44:41 +0000 (0:00:00.195) 0:00:02.193 ************
2025-05-25 03:44:41.613229 | orchestrator | skipping: [testbed-node-3]
2025-05-25 03:44:41.614159 | orchestrator |
2025-05-25 03:44:41.616337 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-05-25 03:44:41.618173 | orchestrator | Sunday 25 May 2025 03:44:41 +0000 (0:00:00.198) 0:00:02.392 ************
2025-05-25 03:44:41.815058 | orchestrator | skipping: [testbed-node-3]
2025-05-25 03:44:41.816379 | orchestrator |
2025-05-25 03:44:41.818995 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-05-25 03:44:41.819043 | orchestrator | Sunday 25 May 2025 03:44:41 +0000 (0:00:00.202) 0:00:02.595 ************
2025-05-25 03:44:41.996266 | orchestrator | skipping: [testbed-node-3]
2025-05-25 03:44:41.996370 | orchestrator |
2025-05-25 03:44:41.998171 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-05-25 03:44:41.999992 | orchestrator | Sunday 25 May 2025 03:44:41 +0000 (0:00:00.178) 0:00:02.773 ************
2025-05-25 03:44:42.200760 | orchestrator | skipping: [testbed-node-3]
2025-05-25 03:44:42.202144 | orchestrator |
2025-05-25 03:44:42.203239 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-05-25 03:44:42.204329 | orchestrator | Sunday 25 May 2025 03:44:42 +0000 (0:00:00.206) 0:00:02.980 ************
2025-05-25 03:44:42.411617 | orchestrator | skipping: [testbed-node-3]
2025-05-25 03:44:42.414729 | orchestrator |
2025-05-25 03:44:42.415668 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-05-25 03:44:42.415853 | orchestrator | Sunday 25 May 2025 03:44:42 +0000 (0:00:00.208) 0:00:03.189 ************
2025-05-25 03:44:42.915681 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_d33fe3e5-8e27-4cea-af5e-c9a31aaf43f7)
2025-05-25 03:44:42.919362 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_d33fe3e5-8e27-4cea-af5e-c9a31aaf43f7)
2025-05-25 03:44:42.919497 | orchestrator |
2025-05-25 03:44:42.919920 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-05-25 03:44:42.920299 | orchestrator | Sunday 25 May 2025 03:44:42 +0000 (0:00:00.506) 0:00:03.695 ************
2025-05-25 03:44:43.336462 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_cdfa8505-de86-48ff-8ed6-b6e1381a94b2)
2025-05-25 03:44:43.336623 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_cdfa8505-de86-48ff-8ed6-b6e1381a94b2)
2025-05-25 03:44:43.337017 | orchestrator |
2025-05-25 03:44:43.337201 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-05-25 03:44:43.337625 | orchestrator | Sunday 25 May 2025 03:44:43 +0000 (0:00:00.420) 0:00:04.115 ************
2025-05-25 03:44:43.972707 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_4276f8fa-1a41-4d3c-8190-a1d2d3b80049)
2025-05-25 03:44:43.977365 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_4276f8fa-1a41-4d3c-8190-a1d2d3b80049)
2025-05-25 03:44:43.977858 | orchestrator |
2025-05-25 03:44:43.978591 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-05-25 03:44:43.979373 | orchestrator | Sunday 25 May 2025 03:44:43 +0000 (0:00:00.634) 0:00:04.750 ************
2025-05-25 03:44:44.791921 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_dac67b12-4a3b-49b0-a18f-dd9740769fda)
2025-05-25 03:44:44.793862 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_dac67b12-4a3b-49b0-a18f-dd9740769fda)
2025-05-25 03:44:44.793891 | orchestrator |
2025-05-25 03:44:44.793948 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-05-25 03:44:44.793957 | orchestrator | Sunday 25 May 2025 03:44:44 +0000 (0:00:00.820) 0:00:05.571 ************
2025-05-25 03:44:45.596101 | orchestrator | ok: [testbed-node-3] => (item=ata-QEMU_DVD-ROM_QM00001)
2025-05-25 03:44:45.596460 | orchestrator |
2025-05-25 03:44:45.597928 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-05-25 03:44:45.598241 | orchestrator | Sunday 25 May 2025 03:44:45 +0000 (0:00:00.799) 0:00:06.370 ************
2025-05-25 03:44:46.001754 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop0)
2025-05-25 03:44:46.004390 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop1)
2025-05-25 03:44:46.008306 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop2)
2025-05-25 03:44:46.012239 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop3)
2025-05-25 03:44:46.015451 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop4)
2025-05-25 03:44:46.018218 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop5)
2025-05-25 03:44:46.019583 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop6)
2025-05-25 03:44:46.022678 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop7)
2025-05-25 03:44:46.025050 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sda)
2025-05-25 03:44:46.026904 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdb)
2025-05-25 03:44:46.030143 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdc)
2025-05-25 03:44:46.032466 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdd)
2025-05-25 03:44:46.033633 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sr0)
2025-05-25 03:44:46.034601 | orchestrator |
2025-05-25 03:44:46.035199 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-05-25 03:44:46.038013 | orchestrator | Sunday 25 May 2025 03:44:45 +0000 (0:00:00.405) 0:00:06.776 ************
2025-05-25 03:44:46.209378 | orchestrator | skipping: [testbed-node-3]
2025-05-25 03:44:46.210123 | orchestrator |
2025-05-25 03:44:46.211229 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-05-25 03:44:46.211254 | orchestrator | Sunday 25 May 2025 03:44:46 +0000 (0:00:00.212) 0:00:06.988 ************
2025-05-25 03:44:46.424738 | orchestrator | skipping: [testbed-node-3]
2025-05-25 03:44:46.425753 | orchestrator |
2025-05-25 03:44:46.427866 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-05-25 03:44:46.428799 | orchestrator | Sunday 25 May 2025 03:44:46 +0000 (0:00:00.213) 0:00:07.201 ************
2025-05-25 03:44:46.633463 | orchestrator | skipping: [testbed-node-3]
2025-05-25 03:44:46.633678 | orchestrator |
2025-05-25 03:44:46.634756 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-05-25 03:44:46.635173 | orchestrator | Sunday 25 May 2025 03:44:46 +0000 (0:00:00.211) 0:00:07.413 ************
2025-05-25 03:44:46.832555 | orchestrator | skipping: [testbed-node-3]
2025-05-25 03:44:46.832706 | orchestrator |
2025-05-25 03:44:46.832723 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-05-25 03:44:46.832736 | orchestrator | Sunday 25 May 2025 03:44:46 +0000 (0:00:00.195) 0:00:07.609 ************
2025-05-25 03:44:47.032163 | orchestrator | skipping: [testbed-node-3]
2025-05-25 03:44:47.034975 | orchestrator |
2025-05-25 03:44:47.035781 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-05-25 03:44:47.036730 | orchestrator | Sunday 25 May 2025 03:44:47 +0000 (0:00:00.200) 0:00:07.809 ************
2025-05-25 03:44:47.236711 | orchestrator | skipping: [testbed-node-3]
2025-05-25 03:44:47.238290 | orchestrator |
2025-05-25 03:44:47.238671 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-05-25 03:44:47.238988 | orchestrator | Sunday 25 May 2025 03:44:47 +0000 (0:00:00.207) 0:00:08.016 ************
2025-05-25 03:44:47.453800 | orchestrator | skipping: [testbed-node-3]
2025-05-25 03:44:47.455162 | orchestrator |
2025-05-25 03:44:47.456918 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-05-25 03:44:47.658141 | orchestrator | Sunday 25 May 2025 03:44:47 +0000 (0:00:00.199) 0:00:08.430 ************
2025-05-25 03:44:48.737489 | orchestrator | ok: [testbed-node-3] => (item=sda1)
2025-05-25 03:44:48.738762 | orchestrator | ok: [testbed-node-3] => (item=sda14)
2025-05-
03:44:48.739972 | orchestrator | ok: [testbed-node-3] => (item=sda15) 2025-05-25 03:44:48.744192 | orchestrator | ok: [testbed-node-3] => (item=sda16) 2025-05-25 03:44:48.744218 | orchestrator | 2025-05-25 03:44:48.744661 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-25 03:44:48.745047 | orchestrator | Sunday 25 May 2025 03:44:48 +0000 (0:00:01.084) 0:00:09.514 ************ 2025-05-25 03:44:48.943600 | orchestrator | skipping: [testbed-node-3] 2025-05-25 03:44:48.943856 | orchestrator | 2025-05-25 03:44:48.945019 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-25 03:44:48.945569 | orchestrator | Sunday 25 May 2025 03:44:48 +0000 (0:00:00.208) 0:00:09.722 ************ 2025-05-25 03:44:49.154441 | orchestrator | skipping: [testbed-node-3] 2025-05-25 03:44:49.154574 | orchestrator | 2025-05-25 03:44:49.155693 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-25 03:44:49.155713 | orchestrator | Sunday 25 May 2025 03:44:49 +0000 (0:00:00.210) 0:00:09.933 ************ 2025-05-25 03:44:49.377381 | orchestrator | skipping: [testbed-node-3] 2025-05-25 03:44:49.377477 | orchestrator | 2025-05-25 03:44:49.379196 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-25 03:44:49.382164 | orchestrator | Sunday 25 May 2025 03:44:49 +0000 (0:00:00.220) 0:00:10.154 ************ 2025-05-25 03:44:49.597344 | orchestrator | skipping: [testbed-node-3] 2025-05-25 03:44:49.597954 | orchestrator | 2025-05-25 03:44:49.598980 | orchestrator | TASK [Set UUIDs for OSD VGs/LVs] *********************************************** 2025-05-25 03:44:49.599610 | orchestrator | Sunday 25 May 2025 03:44:49 +0000 (0:00:00.222) 0:00:10.377 ************ 2025-05-25 03:44:49.797273 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdb', 'value': None}) 2025-05-25 03:44:49.799007 | 
orchestrator | ok: [testbed-node-3] => (item={'key': 'sdc', 'value': None}) 2025-05-25 03:44:49.799481 | orchestrator | 2025-05-25 03:44:49.800341 | orchestrator | TASK [Generate WAL VG names] *************************************************** 2025-05-25 03:44:49.802654 | orchestrator | Sunday 25 May 2025 03:44:49 +0000 (0:00:00.196) 0:00:10.573 ************ 2025-05-25 03:44:49.950476 | orchestrator | skipping: [testbed-node-3] 2025-05-25 03:44:49.950659 | orchestrator | 2025-05-25 03:44:49.951201 | orchestrator | TASK [Generate DB VG names] **************************************************** 2025-05-25 03:44:49.952808 | orchestrator | Sunday 25 May 2025 03:44:49 +0000 (0:00:00.155) 0:00:10.729 ************ 2025-05-25 03:44:50.138383 | orchestrator | skipping: [testbed-node-3] 2025-05-25 03:44:50.140396 | orchestrator | 2025-05-25 03:44:50.141055 | orchestrator | TASK [Generate shared DB/WAL VG names] ***************************************** 2025-05-25 03:44:50.145850 | orchestrator | Sunday 25 May 2025 03:44:50 +0000 (0:00:00.183) 0:00:10.913 ************ 2025-05-25 03:44:50.273689 | orchestrator | skipping: [testbed-node-3] 2025-05-25 03:44:50.275666 | orchestrator | 2025-05-25 03:44:50.276668 | orchestrator | TASK [Define lvm_volumes structures] ******************************************* 2025-05-25 03:44:50.277543 | orchestrator | Sunday 25 May 2025 03:44:50 +0000 (0:00:00.141) 0:00:11.054 ************ 2025-05-25 03:44:50.490622 | orchestrator | ok: [testbed-node-3] 2025-05-25 03:44:50.491361 | orchestrator | 2025-05-25 03:44:50.491917 | orchestrator | TASK [Generate lvm_volumes structure (block only)] ***************************** 2025-05-25 03:44:50.492481 | orchestrator | Sunday 25 May 2025 03:44:50 +0000 (0:00:00.215) 0:00:11.269 ************ 2025-05-25 03:44:50.673553 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '02f362e7-7983-50b5-b688-a41104a01860'}}) 2025-05-25 03:44:50.673742 | orchestrator | ok: 
[testbed-node-3] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'b24cffad-8a1f-50fd-b816-ada28c3c4ac7'}}) 2025-05-25 03:44:50.678297 | orchestrator | 2025-05-25 03:44:50.678463 | orchestrator | TASK [Generate lvm_volumes structure (block + db)] ***************************** 2025-05-25 03:44:50.684292 | orchestrator | Sunday 25 May 2025 03:44:50 +0000 (0:00:00.178) 0:00:11.448 ************ 2025-05-25 03:44:50.935562 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '02f362e7-7983-50b5-b688-a41104a01860'}})  2025-05-25 03:44:50.935659 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'b24cffad-8a1f-50fd-b816-ada28c3c4ac7'}})  2025-05-25 03:44:50.938734 | orchestrator | skipping: [testbed-node-3] 2025-05-25 03:44:50.940171 | orchestrator | 2025-05-25 03:44:50.943161 | orchestrator | TASK [Generate lvm_volumes structure (block + wal)] **************************** 2025-05-25 03:44:50.944206 | orchestrator | Sunday 25 May 2025 03:44:50 +0000 (0:00:00.265) 0:00:11.714 ************ 2025-05-25 03:44:51.395344 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '02f362e7-7983-50b5-b688-a41104a01860'}})  2025-05-25 03:44:51.395447 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'b24cffad-8a1f-50fd-b816-ada28c3c4ac7'}})  2025-05-25 03:44:51.395463 | orchestrator | skipping: [testbed-node-3] 2025-05-25 03:44:51.396916 | orchestrator | 2025-05-25 03:44:51.397255 | orchestrator | TASK [Generate lvm_volumes structure (block + db + wal)] *********************** 2025-05-25 03:44:51.397926 | orchestrator | Sunday 25 May 2025 03:44:51 +0000 (0:00:00.456) 0:00:12.170 ************ 2025-05-25 03:44:51.559580 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '02f362e7-7983-50b5-b688-a41104a01860'}})  2025-05-25 03:44:51.559728 | orchestrator | skipping: [testbed-node-3] 
=> (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'b24cffad-8a1f-50fd-b816-ada28c3c4ac7'}})  2025-05-25 03:44:51.560580 | orchestrator | skipping: [testbed-node-3] 2025-05-25 03:44:51.560607 | orchestrator | 2025-05-25 03:44:51.560621 | orchestrator | TASK [Compile lvm_volumes] ***************************************************** 2025-05-25 03:44:51.561708 | orchestrator | Sunday 25 May 2025 03:44:51 +0000 (0:00:00.167) 0:00:12.337 ************ 2025-05-25 03:44:51.700311 | orchestrator | ok: [testbed-node-3] 2025-05-25 03:44:51.702139 | orchestrator | 2025-05-25 03:44:51.702450 | orchestrator | TASK [Set OSD devices config data] ********************************************* 2025-05-25 03:44:51.705173 | orchestrator | Sunday 25 May 2025 03:44:51 +0000 (0:00:00.141) 0:00:12.479 ************ 2025-05-25 03:44:51.956265 | orchestrator | ok: [testbed-node-3] 2025-05-25 03:44:51.956377 | orchestrator | 2025-05-25 03:44:51.957783 | orchestrator | TASK [Set DB devices config data] ********************************************** 2025-05-25 03:44:51.958171 | orchestrator | Sunday 25 May 2025 03:44:51 +0000 (0:00:00.255) 0:00:12.735 ************ 2025-05-25 03:44:52.099362 | orchestrator | skipping: [testbed-node-3] 2025-05-25 03:44:52.099744 | orchestrator | 2025-05-25 03:44:52.101898 | orchestrator | TASK [Set WAL devices config data] ********************************************* 2025-05-25 03:44:52.103112 | orchestrator | Sunday 25 May 2025 03:44:52 +0000 (0:00:00.143) 0:00:12.878 ************ 2025-05-25 03:44:52.239391 | orchestrator | skipping: [testbed-node-3] 2025-05-25 03:44:52.240937 | orchestrator | 2025-05-25 03:44:52.242786 | orchestrator | TASK [Set DB+WAL devices config data] ****************************************** 2025-05-25 03:44:52.244628 | orchestrator | Sunday 25 May 2025 03:44:52 +0000 (0:00:00.139) 0:00:13.018 ************ 2025-05-25 03:44:52.371791 | orchestrator | skipping: [testbed-node-3] 2025-05-25 03:44:52.375522 | orchestrator | 2025-05-25 
03:44:52.375565 | orchestrator | TASK [Print ceph_osd_devices] ************************************************** 2025-05-25 03:44:52.376549 | orchestrator | Sunday 25 May 2025 03:44:52 +0000 (0:00:00.127) 0:00:13.146 ************ 2025-05-25 03:44:52.527492 | orchestrator | ok: [testbed-node-3] => { 2025-05-25 03:44:52.527612 | orchestrator |  "ceph_osd_devices": { 2025-05-25 03:44:52.529574 | orchestrator |  "sdb": { 2025-05-25 03:44:52.532784 | orchestrator |  "osd_lvm_uuid": "02f362e7-7983-50b5-b688-a41104a01860" 2025-05-25 03:44:52.532826 | orchestrator |  }, 2025-05-25 03:44:52.532838 | orchestrator |  "sdc": { 2025-05-25 03:44:52.533449 | orchestrator |  "osd_lvm_uuid": "b24cffad-8a1f-50fd-b816-ada28c3c4ac7" 2025-05-25 03:44:52.535170 | orchestrator |  } 2025-05-25 03:44:52.535798 | orchestrator |  } 2025-05-25 03:44:52.537769 | orchestrator | } 2025-05-25 03:44:52.539130 | orchestrator | 2025-05-25 03:44:52.539881 | orchestrator | TASK [Print WAL devices] ******************************************************* 2025-05-25 03:44:52.540952 | orchestrator | Sunday 25 May 2025 03:44:52 +0000 (0:00:00.159) 0:00:13.306 ************ 2025-05-25 03:44:52.679157 | orchestrator | skipping: [testbed-node-3] 2025-05-25 03:44:52.679246 | orchestrator | 2025-05-25 03:44:52.679682 | orchestrator | TASK [Print DB devices] ******************************************************** 2025-05-25 03:44:52.681684 | orchestrator | Sunday 25 May 2025 03:44:52 +0000 (0:00:00.150) 0:00:13.456 ************ 2025-05-25 03:44:52.869425 | orchestrator | skipping: [testbed-node-3] 2025-05-25 03:44:52.874929 | orchestrator | 2025-05-25 03:44:52.874966 | orchestrator | TASK [Print shared DB/WAL devices] ********************************************* 2025-05-25 03:44:52.877008 | orchestrator | Sunday 25 May 2025 03:44:52 +0000 (0:00:00.191) 0:00:13.648 ************ 2025-05-25 03:44:52.998741 | orchestrator | skipping: [testbed-node-3] 2025-05-25 03:44:53.000309 | orchestrator | 2025-05-25 
03:44:53.001599 | orchestrator | TASK [Print configuration data] ************************************************ 2025-05-25 03:44:53.002993 | orchestrator | Sunday 25 May 2025 03:44:52 +0000 (0:00:00.130) 0:00:13.778 ************ 2025-05-25 03:44:53.188686 | orchestrator | changed: [testbed-node-3] => { 2025-05-25 03:44:53.188869 | orchestrator |  "_ceph_configure_lvm_config_data": { 2025-05-25 03:44:53.190701 | orchestrator |  "ceph_osd_devices": { 2025-05-25 03:44:53.191582 | orchestrator |  "sdb": { 2025-05-25 03:44:53.192735 | orchestrator |  "osd_lvm_uuid": "02f362e7-7983-50b5-b688-a41104a01860" 2025-05-25 03:44:53.193522 | orchestrator |  }, 2025-05-25 03:44:53.194936 | orchestrator |  "sdc": { 2025-05-25 03:44:53.195913 | orchestrator |  "osd_lvm_uuid": "b24cffad-8a1f-50fd-b816-ada28c3c4ac7" 2025-05-25 03:44:53.197690 | orchestrator |  } 2025-05-25 03:44:53.198915 | orchestrator |  }, 2025-05-25 03:44:53.200185 | orchestrator |  "lvm_volumes": [ 2025-05-25 03:44:53.200838 | orchestrator |  { 2025-05-25 03:44:53.201610 | orchestrator |  "data": "osd-block-02f362e7-7983-50b5-b688-a41104a01860", 2025-05-25 03:44:53.202352 | orchestrator |  "data_vg": "ceph-02f362e7-7983-50b5-b688-a41104a01860" 2025-05-25 03:44:53.202909 | orchestrator |  }, 2025-05-25 03:44:53.203965 | orchestrator |  { 2025-05-25 03:44:53.204541 | orchestrator |  "data": "osd-block-b24cffad-8a1f-50fd-b816-ada28c3c4ac7", 2025-05-25 03:44:53.206828 | orchestrator |  "data_vg": "ceph-b24cffad-8a1f-50fd-b816-ada28c3c4ac7" 2025-05-25 03:44:53.207164 | orchestrator |  } 2025-05-25 03:44:53.207861 | orchestrator |  ] 2025-05-25 03:44:53.208421 | orchestrator |  } 2025-05-25 03:44:53.208963 | orchestrator | } 2025-05-25 03:44:53.209349 | orchestrator | 2025-05-25 03:44:53.209872 | orchestrator | RUNNING HANDLER [Write configuration file] ************************************* 2025-05-25 03:44:53.211301 | orchestrator | Sunday 25 May 2025 03:44:53 +0000 (0:00:00.188) 0:00:13.967 ************ 2025-05-25 
03:44:55.401579 | orchestrator | changed: [testbed-node-3 -> testbed-manager(192.168.16.5)] 2025-05-25 03:44:55.401746 | orchestrator | 2025-05-25 03:44:55.402177 | orchestrator | PLAY [Ceph configure LVM] ****************************************************** 2025-05-25 03:44:55.404121 | orchestrator | 2025-05-25 03:44:55.404220 | orchestrator | TASK [Get extra vars for Ceph configuration] *********************************** 2025-05-25 03:44:55.406174 | orchestrator | Sunday 25 May 2025 03:44:55 +0000 (0:00:02.215) 0:00:16.182 ************ 2025-05-25 03:44:55.649356 | orchestrator | ok: [testbed-node-4 -> testbed-manager(192.168.16.5)] 2025-05-25 03:44:55.651165 | orchestrator | 2025-05-25 03:44:55.651435 | orchestrator | TASK [Get initial list of available block devices] ***************************** 2025-05-25 03:44:55.651938 | orchestrator | Sunday 25 May 2025 03:44:55 +0000 (0:00:00.247) 0:00:16.430 ************ 2025-05-25 03:44:55.879684 | orchestrator | ok: [testbed-node-4] 2025-05-25 03:44:55.883257 | orchestrator | 2025-05-25 03:44:55.883347 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-25 03:44:55.883366 | orchestrator | Sunday 25 May 2025 03:44:55 +0000 (0:00:00.227) 0:00:16.657 ************ 2025-05-25 03:44:56.248800 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop0) 2025-05-25 03:44:56.250339 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop1) 2025-05-25 03:44:56.254438 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop2) 2025-05-25 03:44:56.255726 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop3) 2025-05-25 03:44:56.256273 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop4) 2025-05-25 03:44:56.257422 | orchestrator | included: 
/ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop5) 2025-05-25 03:44:56.258662 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop6) 2025-05-25 03:44:56.259742 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop7) 2025-05-25 03:44:56.262607 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sda) 2025-05-25 03:44:56.262633 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdb) 2025-05-25 03:44:56.262645 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdc) 2025-05-25 03:44:56.262657 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdd) 2025-05-25 03:44:56.262669 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sr0) 2025-05-25 03:44:56.262965 | orchestrator | 2025-05-25 03:44:56.266970 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-25 03:44:56.266994 | orchestrator | Sunday 25 May 2025 03:44:56 +0000 (0:00:00.370) 0:00:17.027 ************ 2025-05-25 03:44:56.456570 | orchestrator | skipping: [testbed-node-4] 2025-05-25 03:44:56.456657 | orchestrator | 2025-05-25 03:44:56.456671 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-25 03:44:56.456685 | orchestrator | Sunday 25 May 2025 03:44:56 +0000 (0:00:00.205) 0:00:17.232 ************ 2025-05-25 03:44:56.667467 | orchestrator | skipping: [testbed-node-4] 2025-05-25 03:44:56.668582 | orchestrator | 2025-05-25 03:44:56.669774 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-25 03:44:56.670591 | orchestrator | Sunday 25 May 2025 03:44:56 +0000 (0:00:00.213) 0:00:17.445 ************ 2025-05-25 03:44:56.872370 | orchestrator | skipping: 
[testbed-node-4] 2025-05-25 03:44:56.872630 | orchestrator | 2025-05-25 03:44:56.873724 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-25 03:44:56.875401 | orchestrator | Sunday 25 May 2025 03:44:56 +0000 (0:00:00.204) 0:00:17.650 ************ 2025-05-25 03:44:57.069746 | orchestrator | skipping: [testbed-node-4] 2025-05-25 03:44:57.069871 | orchestrator | 2025-05-25 03:44:57.070007 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-25 03:44:57.071232 | orchestrator | Sunday 25 May 2025 03:44:57 +0000 (0:00:00.197) 0:00:17.848 ************ 2025-05-25 03:44:57.712717 | orchestrator | skipping: [testbed-node-4] 2025-05-25 03:44:57.713958 | orchestrator | 2025-05-25 03:44:57.714495 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-25 03:44:57.716031 | orchestrator | Sunday 25 May 2025 03:44:57 +0000 (0:00:00.644) 0:00:18.492 ************ 2025-05-25 03:44:57.938703 | orchestrator | skipping: [testbed-node-4] 2025-05-25 03:44:57.939227 | orchestrator | 2025-05-25 03:44:57.940319 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-25 03:44:57.941186 | orchestrator | Sunday 25 May 2025 03:44:57 +0000 (0:00:00.226) 0:00:18.719 ************ 2025-05-25 03:44:58.144200 | orchestrator | skipping: [testbed-node-4] 2025-05-25 03:44:58.144351 | orchestrator | 2025-05-25 03:44:58.144636 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-25 03:44:58.146331 | orchestrator | Sunday 25 May 2025 03:44:58 +0000 (0:00:00.204) 0:00:18.923 ************ 2025-05-25 03:44:58.351211 | orchestrator | skipping: [testbed-node-4] 2025-05-25 03:44:58.351616 | orchestrator | 2025-05-25 03:44:58.352853 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-25 03:44:58.353818 | 
orchestrator | Sunday 25 May 2025 03:44:58 +0000 (0:00:00.206) 0:00:19.130 ************ 2025-05-25 03:44:58.833985 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_43c767cb-0159-4619-b6d4-e498fa6e5c83) 2025-05-25 03:44:58.837883 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_43c767cb-0159-4619-b6d4-e498fa6e5c83) 2025-05-25 03:44:58.837933 | orchestrator | 2025-05-25 03:44:58.837947 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-25 03:44:58.842913 | orchestrator | Sunday 25 May 2025 03:44:58 +0000 (0:00:00.484) 0:00:19.614 ************ 2025-05-25 03:44:59.257475 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_17d1c6f1-1305-4025-b6c8-ee1be555c001) 2025-05-25 03:44:59.260031 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_17d1c6f1-1305-4025-b6c8-ee1be555c001) 2025-05-25 03:44:59.260095 | orchestrator | 2025-05-25 03:44:59.261745 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-25 03:44:59.262990 | orchestrator | Sunday 25 May 2025 03:44:59 +0000 (0:00:00.422) 0:00:20.037 ************ 2025-05-25 03:44:59.676830 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_b0e50223-c4d0-48f7-a5f8-d1963b067c82) 2025-05-25 03:44:59.676930 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_b0e50223-c4d0-48f7-a5f8-d1963b067c82) 2025-05-25 03:44:59.676944 | orchestrator | 2025-05-25 03:44:59.677235 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-25 03:44:59.677669 | orchestrator | Sunday 25 May 2025 03:44:59 +0000 (0:00:00.417) 0:00:20.455 ************ 2025-05-25 03:45:00.140247 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_38e86a76-d592-4447-9c79-2151d2192c3f) 2025-05-25 03:45:00.140460 | orchestrator | ok: [testbed-node-4] => 
(item=scsi-SQEMU_QEMU_HARDDISK_38e86a76-d592-4447-9c79-2151d2192c3f) 2025-05-25 03:45:00.142373 | orchestrator | 2025-05-25 03:45:00.142612 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-25 03:45:00.145063 | orchestrator | Sunday 25 May 2025 03:45:00 +0000 (0:00:00.462) 0:00:20.917 ************ 2025-05-25 03:45:00.511629 | orchestrator | ok: [testbed-node-4] => (item=ata-QEMU_DVD-ROM_QM00001) 2025-05-25 03:45:00.512011 | orchestrator | 2025-05-25 03:45:00.512871 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-25 03:45:00.513506 | orchestrator | Sunday 25 May 2025 03:45:00 +0000 (0:00:00.374) 0:00:21.291 ************ 2025-05-25 03:45:00.875819 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop0) 2025-05-25 03:45:00.878797 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop1) 2025-05-25 03:45:00.882551 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop2) 2025-05-25 03:45:00.883143 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop3) 2025-05-25 03:45:00.883738 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop4) 2025-05-25 03:45:00.884568 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop5) 2025-05-25 03:45:00.885188 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop6) 2025-05-25 03:45:00.885830 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop7) 2025-05-25 03:45:00.886486 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sda) 2025-05-25 03:45:00.887334 | orchestrator | included: 
/ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdb) 2025-05-25 03:45:00.887688 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdc) 2025-05-25 03:45:00.888182 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdd) 2025-05-25 03:45:00.888695 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sr0) 2025-05-25 03:45:00.889431 | orchestrator | 2025-05-25 03:45:00.890694 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-25 03:45:00.891130 | orchestrator | Sunday 25 May 2025 03:45:00 +0000 (0:00:00.363) 0:00:21.654 ************ 2025-05-25 03:45:01.082248 | orchestrator | skipping: [testbed-node-4] 2025-05-25 03:45:01.083205 | orchestrator | 2025-05-25 03:45:01.083770 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-25 03:45:01.084710 | orchestrator | Sunday 25 May 2025 03:45:01 +0000 (0:00:00.207) 0:00:21.862 ************ 2025-05-25 03:45:01.709262 | orchestrator | skipping: [testbed-node-4] 2025-05-25 03:45:01.709445 | orchestrator | 2025-05-25 03:45:01.710344 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-25 03:45:01.711227 | orchestrator | Sunday 25 May 2025 03:45:01 +0000 (0:00:00.624) 0:00:22.486 ************ 2025-05-25 03:45:01.907826 | orchestrator | skipping: [testbed-node-4] 2025-05-25 03:45:01.909122 | orchestrator | 2025-05-25 03:45:01.911226 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-25 03:45:01.911356 | orchestrator | Sunday 25 May 2025 03:45:01 +0000 (0:00:00.197) 0:00:22.684 ************ 2025-05-25 03:45:02.091621 | orchestrator | skipping: [testbed-node-4] 2025-05-25 03:45:02.094527 | orchestrator | 2025-05-25 03:45:02.097153 | orchestrator | TASK [Add known 
partitions to the list of available block devices] ************* 2025-05-25 03:45:02.098845 | orchestrator | Sunday 25 May 2025 03:45:02 +0000 (0:00:00.185) 0:00:22.870 ************ 2025-05-25 03:45:02.309767 | orchestrator | skipping: [testbed-node-4] 2025-05-25 03:45:02.314266 | orchestrator | 2025-05-25 03:45:02.315224 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-25 03:45:02.315897 | orchestrator | Sunday 25 May 2025 03:45:02 +0000 (0:00:00.217) 0:00:23.087 ************ 2025-05-25 03:45:02.501607 | orchestrator | skipping: [testbed-node-4] 2025-05-25 03:45:02.502943 | orchestrator | 2025-05-25 03:45:02.506725 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-25 03:45:02.508724 | orchestrator | Sunday 25 May 2025 03:45:02 +0000 (0:00:00.191) 0:00:23.279 ************ 2025-05-25 03:45:02.696503 | orchestrator | skipping: [testbed-node-4] 2025-05-25 03:45:02.697502 | orchestrator | 2025-05-25 03:45:02.704438 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-25 03:45:02.704716 | orchestrator | Sunday 25 May 2025 03:45:02 +0000 (0:00:00.194) 0:00:23.473 ************ 2025-05-25 03:45:02.887802 | orchestrator | skipping: [testbed-node-4] 2025-05-25 03:45:02.887900 | orchestrator | 2025-05-25 03:45:02.888160 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-25 03:45:02.888514 | orchestrator | Sunday 25 May 2025 03:45:02 +0000 (0:00:00.192) 0:00:23.666 ************ 2025-05-25 03:45:03.533103 | orchestrator | ok: [testbed-node-4] => (item=sda1) 2025-05-25 03:45:03.533212 | orchestrator | ok: [testbed-node-4] => (item=sda14) 2025-05-25 03:45:03.533642 | orchestrator | ok: [testbed-node-4] => (item=sda15) 2025-05-25 03:45:03.534615 | orchestrator | ok: [testbed-node-4] => (item=sda16) 2025-05-25 03:45:03.535642 | orchestrator | 2025-05-25 
03:45:03.536691 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-25 03:45:03.539873 | orchestrator | Sunday 25 May 2025 03:45:03 +0000 (0:00:00.644) 0:00:24.310 ************ 2025-05-25 03:45:03.737596 | orchestrator | skipping: [testbed-node-4] 2025-05-25 03:45:03.740949 | orchestrator | 2025-05-25 03:45:03.741831 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-25 03:45:03.745403 | orchestrator | Sunday 25 May 2025 03:45:03 +0000 (0:00:00.203) 0:00:24.514 ************ 2025-05-25 03:45:03.923408 | orchestrator | skipping: [testbed-node-4] 2025-05-25 03:45:03.924002 | orchestrator | 2025-05-25 03:45:03.929352 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-25 03:45:03.931639 | orchestrator | Sunday 25 May 2025 03:45:03 +0000 (0:00:00.188) 0:00:24.703 ************ 2025-05-25 03:45:04.109711 | orchestrator | skipping: [testbed-node-4] 2025-05-25 03:45:04.110004 | orchestrator | 2025-05-25 03:45:04.111160 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-25 03:45:04.111926 | orchestrator | Sunday 25 May 2025 03:45:04 +0000 (0:00:00.185) 0:00:24.888 ************ 2025-05-25 03:45:04.316758 | orchestrator | skipping: [testbed-node-4] 2025-05-25 03:45:04.318210 | orchestrator | 2025-05-25 03:45:04.319232 | orchestrator | TASK [Set UUIDs for OSD VGs/LVs] *********************************************** 2025-05-25 03:45:04.320899 | orchestrator | Sunday 25 May 2025 03:45:04 +0000 (0:00:00.207) 0:00:25.096 ************ 2025-05-25 03:45:04.711192 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdb', 'value': None}) 2025-05-25 03:45:04.713131 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdc', 'value': None}) 2025-05-25 03:45:04.713559 | orchestrator | 2025-05-25 03:45:04.715670 | orchestrator | TASK [Generate WAL VG names] 
*************************************************** 2025-05-25 03:45:04.717450 | orchestrator | Sunday 25 May 2025 03:45:04 +0000 (0:00:00.391) 0:00:25.488 ************ 2025-05-25 03:45:04.857271 | orchestrator | skipping: [testbed-node-4] 2025-05-25 03:45:04.858347 | orchestrator | 2025-05-25 03:45:04.859323 | orchestrator | TASK [Generate DB VG names] **************************************************** 2025-05-25 03:45:04.861463 | orchestrator | Sunday 25 May 2025 03:45:04 +0000 (0:00:00.148) 0:00:25.636 ************ 2025-05-25 03:45:05.004499 | orchestrator | skipping: [testbed-node-4] 2025-05-25 03:45:05.005928 | orchestrator | 2025-05-25 03:45:05.007733 | orchestrator | TASK [Generate shared DB/WAL VG names] ***************************************** 2025-05-25 03:45:05.011483 | orchestrator | Sunday 25 May 2025 03:45:04 +0000 (0:00:00.146) 0:00:25.783 ************ 2025-05-25 03:45:05.174926 | orchestrator | skipping: [testbed-node-4] 2025-05-25 03:45:05.175722 | orchestrator | 2025-05-25 03:45:05.176839 | orchestrator | TASK [Define lvm_volumes structures] ******************************************* 2025-05-25 03:45:05.178103 | orchestrator | Sunday 25 May 2025 03:45:05 +0000 (0:00:00.170) 0:00:25.953 ************ 2025-05-25 03:45:05.330823 | orchestrator | ok: [testbed-node-4] 2025-05-25 03:45:05.331001 | orchestrator | 2025-05-25 03:45:05.331499 | orchestrator | TASK [Generate lvm_volumes structure (block only)] ***************************** 2025-05-25 03:45:05.331847 | orchestrator | Sunday 25 May 2025 03:45:05 +0000 (0:00:00.157) 0:00:26.111 ************ 2025-05-25 03:45:05.496372 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '02ca1cf7-fa58-5bc0-a798-b7d21582c1b0'}}) 2025-05-25 03:45:05.497207 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '733a1394-dd45-5d63-8d82-63858202edf3'}}) 2025-05-25 03:45:05.498692 | orchestrator | 2025-05-25 03:45:05.500294 | orchestrator | TASK 
[Generate lvm_volumes structure (block + db)] ***************************** 2025-05-25 03:45:05.500946 | orchestrator | Sunday 25 May 2025 03:45:05 +0000 (0:00:00.164) 0:00:26.276 ************ 2025-05-25 03:45:05.641563 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '02ca1cf7-fa58-5bc0-a798-b7d21582c1b0'}})  2025-05-25 03:45:05.642479 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '733a1394-dd45-5d63-8d82-63858202edf3'}})  2025-05-25 03:45:05.644958 | orchestrator | skipping: [testbed-node-4] 2025-05-25 03:45:05.646951 | orchestrator | 2025-05-25 03:45:05.647693 | orchestrator | TASK [Generate lvm_volumes structure (block + wal)] **************************** 2025-05-25 03:45:05.648817 | orchestrator | Sunday 25 May 2025 03:45:05 +0000 (0:00:00.145) 0:00:26.421 ************ 2025-05-25 03:45:05.807327 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '02ca1cf7-fa58-5bc0-a798-b7d21582c1b0'}})  2025-05-25 03:45:05.807856 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '733a1394-dd45-5d63-8d82-63858202edf3'}})  2025-05-25 03:45:05.811724 | orchestrator | skipping: [testbed-node-4] 2025-05-25 03:45:05.812550 | orchestrator | 2025-05-25 03:45:05.812795 | orchestrator | TASK [Generate lvm_volumes structure (block + db + wal)] *********************** 2025-05-25 03:45:05.813516 | orchestrator | Sunday 25 May 2025 03:45:05 +0000 (0:00:00.159) 0:00:26.580 ************ 2025-05-25 03:45:05.952131 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '02ca1cf7-fa58-5bc0-a798-b7d21582c1b0'}})  2025-05-25 03:45:05.953474 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '733a1394-dd45-5d63-8d82-63858202edf3'}})  2025-05-25 03:45:05.957318 | orchestrator | skipping: [testbed-node-4] 2025-05-25 03:45:05.957348 | 
orchestrator |
2025-05-25 03:45:05.957361 | orchestrator | TASK [Compile lvm_volumes] *****************************************************
2025-05-25 03:45:05.957374 | orchestrator | Sunday 25 May 2025 03:45:05 +0000 (0:00:00.149) 0:00:26.730 ************
2025-05-25 03:45:06.097551 | orchestrator | ok: [testbed-node-4]
2025-05-25 03:45:06.097649 | orchestrator |
2025-05-25 03:45:06.098221 | orchestrator | TASK [Set OSD devices config data] *********************************************
2025-05-25 03:45:06.099100 | orchestrator | Sunday 25 May 2025 03:45:06 +0000 (0:00:00.146) 0:00:26.877 ************
2025-05-25 03:45:06.258773 | orchestrator | ok: [testbed-node-4]
2025-05-25 03:45:06.259348 | orchestrator |
2025-05-25 03:45:06.260858 | orchestrator | TASK [Set DB devices config data] **********************************************
2025-05-25 03:45:06.265128 | orchestrator | Sunday 25 May 2025 03:45:06 +0000 (0:00:00.160) 0:00:27.037 ************
2025-05-25 03:45:06.404365 | orchestrator | skipping: [testbed-node-4]
2025-05-25 03:45:06.406221 | orchestrator |
2025-05-25 03:45:06.410841 | orchestrator | TASK [Set WAL devices config data] *********************************************
2025-05-25 03:45:06.411724 | orchestrator | Sunday 25 May 2025 03:45:06 +0000 (0:00:00.145) 0:00:27.183 ************
2025-05-25 03:45:06.714198 | orchestrator | skipping: [testbed-node-4]
2025-05-25 03:45:06.714852 | orchestrator |
2025-05-25 03:45:06.715059 | orchestrator | TASK [Set DB+WAL devices config data] ******************************************
2025-05-25 03:45:06.716351 | orchestrator | Sunday 25 May 2025 03:45:06 +0000 (0:00:00.310) 0:00:27.493 ************
2025-05-25 03:45:06.851149 | orchestrator | skipping: [testbed-node-4]
2025-05-25 03:45:06.852099 | orchestrator |
2025-05-25 03:45:06.856469 | orchestrator | TASK [Print ceph_osd_devices] **************************************************
2025-05-25 03:45:06.857381 | orchestrator | Sunday 25 May 2025 03:45:06 +0000 (0:00:00.136) 0:00:27.630 ************
2025-05-25 03:45:06.999606 | orchestrator | ok: [testbed-node-4] => {
2025-05-25 03:45:07.000307 | orchestrator |     "ceph_osd_devices": {
2025-05-25 03:45:07.002282 | orchestrator |         "sdb": {
2025-05-25 03:45:07.006113 | orchestrator |             "osd_lvm_uuid": "02ca1cf7-fa58-5bc0-a798-b7d21582c1b0"
2025-05-25 03:45:07.007556 | orchestrator |         },
2025-05-25 03:45:07.009178 | orchestrator |         "sdc": {
2025-05-25 03:45:07.010599 | orchestrator |             "osd_lvm_uuid": "733a1394-dd45-5d63-8d82-63858202edf3"
2025-05-25 03:45:07.011399 | orchestrator |         }
2025-05-25 03:45:07.012546 | orchestrator |     }
2025-05-25 03:45:07.013760 | orchestrator | }
2025-05-25 03:45:07.014680 | orchestrator |
2025-05-25 03:45:07.015632 | orchestrator | TASK [Print WAL devices] *******************************************************
2025-05-25 03:45:07.016766 | orchestrator | Sunday 25 May 2025 03:45:06 +0000 (0:00:00.148) 0:00:27.778 ************
2025-05-25 03:45:07.132744 | orchestrator | skipping: [testbed-node-4]
2025-05-25 03:45:07.133184 | orchestrator |
2025-05-25 03:45:07.135476 | orchestrator | TASK [Print DB devices] ********************************************************
2025-05-25 03:45:07.139179 | orchestrator | Sunday 25 May 2025 03:45:07 +0000 (0:00:00.134) 0:00:27.912 ************
2025-05-25 03:45:07.260393 | orchestrator | skipping: [testbed-node-4]
2025-05-25 03:45:07.261458 | orchestrator |
2025-05-25 03:45:07.262303 | orchestrator | TASK [Print shared DB/WAL devices] *********************************************
2025-05-25 03:45:07.266775 | orchestrator | Sunday 25 May 2025 03:45:07 +0000 (0:00:00.127) 0:00:28.039 ************
2025-05-25 03:45:07.389536 | orchestrator | skipping: [testbed-node-4]
2025-05-25 03:45:07.390388 | orchestrator |
2025-05-25 03:45:07.391699 | orchestrator | TASK [Print configuration data] ************************************************
2025-05-25 03:45:07.395062 | orchestrator | Sunday 25 May 2025 03:45:07 +0000 (0:00:00.129) 0:00:28.169 ************
2025-05-25 03:45:07.579741 | orchestrator | changed: [testbed-node-4] => {
2025-05-25 03:45:07.580630 | orchestrator |     "_ceph_configure_lvm_config_data": {
2025-05-25 03:45:07.582198 | orchestrator |         "ceph_osd_devices": {
2025-05-25 03:45:07.583056 | orchestrator |             "sdb": {
2025-05-25 03:45:07.584312 | orchestrator |                 "osd_lvm_uuid": "02ca1cf7-fa58-5bc0-a798-b7d21582c1b0"
2025-05-25 03:45:07.586371 | orchestrator |             },
2025-05-25 03:45:07.588054 | orchestrator |             "sdc": {
2025-05-25 03:45:07.590186 | orchestrator |                 "osd_lvm_uuid": "733a1394-dd45-5d63-8d82-63858202edf3"
2025-05-25 03:45:07.593538 | orchestrator |             }
2025-05-25 03:45:07.594305 | orchestrator |         },
2025-05-25 03:45:07.595129 | orchestrator |         "lvm_volumes": [
2025-05-25 03:45:07.596012 | orchestrator |             {
2025-05-25 03:45:07.596937 | orchestrator |                 "data": "osd-block-02ca1cf7-fa58-5bc0-a798-b7d21582c1b0",
2025-05-25 03:45:07.597714 | orchestrator |                 "data_vg": "ceph-02ca1cf7-fa58-5bc0-a798-b7d21582c1b0"
2025-05-25 03:45:07.598890 | orchestrator |             },
2025-05-25 03:45:07.599040 | orchestrator |             {
2025-05-25 03:45:07.600345 | orchestrator |                 "data": "osd-block-733a1394-dd45-5d63-8d82-63858202edf3",
2025-05-25 03:45:07.603375 | orchestrator |                 "data_vg": "ceph-733a1394-dd45-5d63-8d82-63858202edf3"
2025-05-25 03:45:07.604721 | orchestrator |             }
2025-05-25 03:45:07.605358 | orchestrator |         ]
2025-05-25 03:45:07.606130 | orchestrator |     }
2025-05-25 03:45:07.606925 | orchestrator | }
2025-05-25 03:45:07.607573 | orchestrator |
2025-05-25 03:45:07.609561 | orchestrator | RUNNING HANDLER [Write configuration file] *************************************
2025-05-25 03:45:07.610185 | orchestrator | Sunday 25 May 2025 03:45:07 +0000 (0:00:00.188) 0:00:28.358 ************
2025-05-25 03:45:08.558263 | orchestrator | changed: [testbed-node-4 -> testbed-manager(192.168.16.5)]
2025-05-25 03:45:08.558430 | orchestrator |
2025-05-25 03:45:08.558827 | orchestrator | PLAY [Ceph configure LVM] ******************************************************
2025-05-25 03:45:08.559716 | orchestrator |
2025-05-25 03:45:08.560701 | orchestrator | TASK [Get extra vars for Ceph configuration] ***********************************
2025-05-25 03:45:08.561717 | orchestrator | Sunday 25 May 2025 03:45:08 +0000 (0:00:00.976) 0:00:29.335 ************
2025-05-25 03:45:08.915522 | orchestrator | ok: [testbed-node-5 -> testbed-manager(192.168.16.5)]
2025-05-25 03:45:08.917491 | orchestrator |
2025-05-25 03:45:08.918158 | orchestrator | TASK [Get initial list of available block devices] *****************************
2025-05-25 03:45:08.919130 | orchestrator | Sunday 25 May 2025 03:45:08 +0000 (0:00:00.359) 0:00:29.694 ************
2025-05-25 03:45:09.386164 | orchestrator | ok: [testbed-node-5]
2025-05-25 03:45:09.386644 | orchestrator |
2025-05-25 03:45:09.390948 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-05-25 03:45:09.391970 | orchestrator | Sunday 25 May 2025 03:45:09 +0000 (0:00:00.471) 0:00:30.166 ************
2025-05-25 03:45:09.698347 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop0)
2025-05-25 03:45:09.698922 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop1)
2025-05-25 03:45:09.701926 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop2)
2025-05-25 03:45:09.702571 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop3)
2025-05-25 03:45:09.703405 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop4)
2025-05-25 03:45:09.703915 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop5)
2025-05-25 03:45:09.704631 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop6)
2025-05-25 03:45:09.705342
| orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop7) 2025-05-25 03:45:09.705870 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sda) 2025-05-25 03:45:09.706583 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdb) 2025-05-25 03:45:09.707297 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdc) 2025-05-25 03:45:09.711204 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdd) 2025-05-25 03:45:09.711268 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sr0) 2025-05-25 03:45:09.711785 | orchestrator | 2025-05-25 03:45:09.712190 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-25 03:45:09.712730 | orchestrator | Sunday 25 May 2025 03:45:09 +0000 (0:00:00.312) 0:00:30.479 ************ 2025-05-25 03:45:09.887496 | orchestrator | skipping: [testbed-node-5] 2025-05-25 03:45:09.888518 | orchestrator | 2025-05-25 03:45:09.889061 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-25 03:45:09.892314 | orchestrator | Sunday 25 May 2025 03:45:09 +0000 (0:00:00.189) 0:00:30.668 ************ 2025-05-25 03:45:10.071425 | orchestrator | skipping: [testbed-node-5] 2025-05-25 03:45:10.071765 | orchestrator | 2025-05-25 03:45:10.075365 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-25 03:45:10.075640 | orchestrator | Sunday 25 May 2025 03:45:10 +0000 (0:00:00.183) 0:00:30.851 ************ 2025-05-25 03:45:10.265052 | orchestrator | skipping: [testbed-node-5] 2025-05-25 03:45:10.266279 | orchestrator | 2025-05-25 03:45:10.270705 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-25 03:45:10.270756 | 
orchestrator | Sunday 25 May 2025 03:45:10 +0000 (0:00:00.194) 0:00:31.045 ************ 2025-05-25 03:45:10.455243 | orchestrator | skipping: [testbed-node-5] 2025-05-25 03:45:10.456703 | orchestrator | 2025-05-25 03:45:10.458232 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-25 03:45:10.459118 | orchestrator | Sunday 25 May 2025 03:45:10 +0000 (0:00:00.187) 0:00:31.233 ************ 2025-05-25 03:45:10.627836 | orchestrator | skipping: [testbed-node-5] 2025-05-25 03:45:10.629294 | orchestrator | 2025-05-25 03:45:10.631460 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-25 03:45:10.633448 | orchestrator | Sunday 25 May 2025 03:45:10 +0000 (0:00:00.172) 0:00:31.406 ************ 2025-05-25 03:45:10.786660 | orchestrator | skipping: [testbed-node-5] 2025-05-25 03:45:10.787863 | orchestrator | 2025-05-25 03:45:10.788894 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-25 03:45:10.790273 | orchestrator | Sunday 25 May 2025 03:45:10 +0000 (0:00:00.158) 0:00:31.564 ************ 2025-05-25 03:45:10.980901 | orchestrator | skipping: [testbed-node-5] 2025-05-25 03:45:10.982067 | orchestrator | 2025-05-25 03:45:10.984380 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-25 03:45:10.986295 | orchestrator | Sunday 25 May 2025 03:45:10 +0000 (0:00:00.195) 0:00:31.759 ************ 2025-05-25 03:45:11.176700 | orchestrator | skipping: [testbed-node-5] 2025-05-25 03:45:11.177253 | orchestrator | 2025-05-25 03:45:11.178580 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-25 03:45:11.180680 | orchestrator | Sunday 25 May 2025 03:45:11 +0000 (0:00:00.196) 0:00:31.956 ************ 2025-05-25 03:45:11.663793 | orchestrator | ok: [testbed-node-5] => 
(item=scsi-0QEMU_QEMU_HARDDISK_2fcb52f5-9fc6-4d4d-aaa1-77a08e6dc4e0) 2025-05-25 03:45:11.663892 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_2fcb52f5-9fc6-4d4d-aaa1-77a08e6dc4e0) 2025-05-25 03:45:11.665278 | orchestrator | 2025-05-25 03:45:11.669711 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-25 03:45:11.670013 | orchestrator | Sunday 25 May 2025 03:45:11 +0000 (0:00:00.485) 0:00:32.441 ************ 2025-05-25 03:45:12.308745 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_201f277c-fdb2-416e-b305-0d8ba90b32cd) 2025-05-25 03:45:12.308861 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_201f277c-fdb2-416e-b305-0d8ba90b32cd) 2025-05-25 03:45:12.311890 | orchestrator | 2025-05-25 03:45:12.311929 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-25 03:45:12.313060 | orchestrator | Sunday 25 May 2025 03:45:12 +0000 (0:00:00.642) 0:00:33.084 ************ 2025-05-25 03:45:12.722278 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_8968a7f7-851b-405b-80f4-de48ab1dffee) 2025-05-25 03:45:12.722912 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_8968a7f7-851b-405b-80f4-de48ab1dffee) 2025-05-25 03:45:12.723656 | orchestrator | 2025-05-25 03:45:12.724840 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-25 03:45:12.725236 | orchestrator | Sunday 25 May 2025 03:45:12 +0000 (0:00:00.418) 0:00:33.502 ************ 2025-05-25 03:45:13.145369 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_603d0154-8a06-450e-a743-756d85b1bc6a) 2025-05-25 03:45:13.145812 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_603d0154-8a06-450e-a743-756d85b1bc6a) 2025-05-25 03:45:13.146431 | orchestrator | 2025-05-25 03:45:13.147155 | orchestrator | TASK [Add known links to 
the list of available block devices] ****************** 2025-05-25 03:45:13.147984 | orchestrator | Sunday 25 May 2025 03:45:13 +0000 (0:00:00.417) 0:00:33.920 ************ 2025-05-25 03:45:13.464737 | orchestrator | ok: [testbed-node-5] => (item=ata-QEMU_DVD-ROM_QM00001) 2025-05-25 03:45:13.465465 | orchestrator | 2025-05-25 03:45:13.466465 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-25 03:45:13.466971 | orchestrator | Sunday 25 May 2025 03:45:13 +0000 (0:00:00.323) 0:00:34.243 ************ 2025-05-25 03:45:13.844550 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop0) 2025-05-25 03:45:13.844686 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop1) 2025-05-25 03:45:13.846156 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop2) 2025-05-25 03:45:13.847899 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop3) 2025-05-25 03:45:13.848668 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop4) 2025-05-25 03:45:13.849360 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop5) 2025-05-25 03:45:13.850415 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop6) 2025-05-25 03:45:13.851024 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop7) 2025-05-25 03:45:13.851046 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sda) 2025-05-25 03:45:13.851902 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdb) 2025-05-25 03:45:13.852976 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdc) 
2025-05-25 03:45:13.853776 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdd) 2025-05-25 03:45:13.854003 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sr0) 2025-05-25 03:45:13.854478 | orchestrator | 2025-05-25 03:45:13.855384 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-25 03:45:13.855730 | orchestrator | Sunday 25 May 2025 03:45:13 +0000 (0:00:00.379) 0:00:34.623 ************ 2025-05-25 03:45:14.053451 | orchestrator | skipping: [testbed-node-5] 2025-05-25 03:45:14.053550 | orchestrator | 2025-05-25 03:45:14.054362 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-25 03:45:14.055101 | orchestrator | Sunday 25 May 2025 03:45:14 +0000 (0:00:00.209) 0:00:34.832 ************ 2025-05-25 03:45:14.277710 | orchestrator | skipping: [testbed-node-5] 2025-05-25 03:45:14.278315 | orchestrator | 2025-05-25 03:45:14.278891 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-25 03:45:14.280326 | orchestrator | Sunday 25 May 2025 03:45:14 +0000 (0:00:00.222) 0:00:35.055 ************ 2025-05-25 03:45:14.486261 | orchestrator | skipping: [testbed-node-5] 2025-05-25 03:45:14.487230 | orchestrator | 2025-05-25 03:45:14.489105 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-25 03:45:14.491769 | orchestrator | Sunday 25 May 2025 03:45:14 +0000 (0:00:00.209) 0:00:35.264 ************ 2025-05-25 03:45:14.700869 | orchestrator | skipping: [testbed-node-5] 2025-05-25 03:45:14.701361 | orchestrator | 2025-05-25 03:45:14.702214 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-25 03:45:14.702564 | orchestrator | Sunday 25 May 2025 03:45:14 +0000 (0:00:00.215) 0:00:35.480 ************ 2025-05-25 03:45:14.900012 
| orchestrator | skipping: [testbed-node-5] 2025-05-25 03:45:14.900267 | orchestrator | 2025-05-25 03:45:14.901005 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-25 03:45:14.901904 | orchestrator | Sunday 25 May 2025 03:45:14 +0000 (0:00:00.196) 0:00:35.677 ************ 2025-05-25 03:45:15.533495 | orchestrator | skipping: [testbed-node-5] 2025-05-25 03:45:15.533598 | orchestrator | 2025-05-25 03:45:15.534555 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-25 03:45:15.535440 | orchestrator | Sunday 25 May 2025 03:45:15 +0000 (0:00:00.630) 0:00:36.307 ************ 2025-05-25 03:45:15.728323 | orchestrator | skipping: [testbed-node-5] 2025-05-25 03:45:15.733032 | orchestrator | 2025-05-25 03:45:15.733154 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-25 03:45:15.735644 | orchestrator | Sunday 25 May 2025 03:45:15 +0000 (0:00:00.199) 0:00:36.506 ************ 2025-05-25 03:45:15.934197 | orchestrator | skipping: [testbed-node-5] 2025-05-25 03:45:15.934645 | orchestrator | 2025-05-25 03:45:15.935426 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-25 03:45:15.937515 | orchestrator | Sunday 25 May 2025 03:45:15 +0000 (0:00:00.205) 0:00:36.712 ************ 2025-05-25 03:45:16.599844 | orchestrator | ok: [testbed-node-5] => (item=sda1) 2025-05-25 03:45:16.600626 | orchestrator | ok: [testbed-node-5] => (item=sda14) 2025-05-25 03:45:16.601486 | orchestrator | ok: [testbed-node-5] => (item=sda15) 2025-05-25 03:45:16.602918 | orchestrator | ok: [testbed-node-5] => (item=sda16) 2025-05-25 03:45:16.603631 | orchestrator | 2025-05-25 03:45:16.604573 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-25 03:45:16.606142 | orchestrator | Sunday 25 May 2025 03:45:16 +0000 (0:00:00.667) 0:00:37.379 
************ 2025-05-25 03:45:16.804119 | orchestrator | skipping: [testbed-node-5] 2025-05-25 03:45:16.804399 | orchestrator | 2025-05-25 03:45:16.805635 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-25 03:45:16.808261 | orchestrator | Sunday 25 May 2025 03:45:16 +0000 (0:00:00.204) 0:00:37.583 ************ 2025-05-25 03:45:17.023160 | orchestrator | skipping: [testbed-node-5] 2025-05-25 03:45:17.023381 | orchestrator | 2025-05-25 03:45:17.024768 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-25 03:45:17.027165 | orchestrator | Sunday 25 May 2025 03:45:17 +0000 (0:00:00.217) 0:00:37.801 ************ 2025-05-25 03:45:17.228992 | orchestrator | skipping: [testbed-node-5] 2025-05-25 03:45:17.229269 | orchestrator | 2025-05-25 03:45:17.230339 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-25 03:45:17.231193 | orchestrator | Sunday 25 May 2025 03:45:17 +0000 (0:00:00.206) 0:00:38.008 ************ 2025-05-25 03:45:17.424152 | orchestrator | skipping: [testbed-node-5] 2025-05-25 03:45:17.424567 | orchestrator | 2025-05-25 03:45:17.425729 | orchestrator | TASK [Set UUIDs for OSD VGs/LVs] *********************************************** 2025-05-25 03:45:17.426453 | orchestrator | Sunday 25 May 2025 03:45:17 +0000 (0:00:00.195) 0:00:38.203 ************ 2025-05-25 03:45:17.591544 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdb', 'value': None}) 2025-05-25 03:45:17.592653 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdc', 'value': None}) 2025-05-25 03:45:17.593506 | orchestrator | 2025-05-25 03:45:17.595643 | orchestrator | TASK [Generate WAL VG names] *************************************************** 2025-05-25 03:45:17.595735 | orchestrator | Sunday 25 May 2025 03:45:17 +0000 (0:00:00.166) 0:00:38.370 ************ 2025-05-25 03:45:17.729759 | orchestrator | skipping: 
[testbed-node-5] 2025-05-25 03:45:17.730162 | orchestrator | 2025-05-25 03:45:17.730775 | orchestrator | TASK [Generate DB VG names] **************************************************** 2025-05-25 03:45:17.731589 | orchestrator | Sunday 25 May 2025 03:45:17 +0000 (0:00:00.138) 0:00:38.508 ************ 2025-05-25 03:45:17.866182 | orchestrator | skipping: [testbed-node-5] 2025-05-25 03:45:17.867248 | orchestrator | 2025-05-25 03:45:17.868139 | orchestrator | TASK [Generate shared DB/WAL VG names] ***************************************** 2025-05-25 03:45:17.868563 | orchestrator | Sunday 25 May 2025 03:45:17 +0000 (0:00:00.135) 0:00:38.644 ************ 2025-05-25 03:45:17.999258 | orchestrator | skipping: [testbed-node-5] 2025-05-25 03:45:18.000911 | orchestrator | 2025-05-25 03:45:18.001464 | orchestrator | TASK [Define lvm_volumes structures] ******************************************* 2025-05-25 03:45:18.002261 | orchestrator | Sunday 25 May 2025 03:45:17 +0000 (0:00:00.133) 0:00:38.778 ************ 2025-05-25 03:45:18.341939 | orchestrator | ok: [testbed-node-5] 2025-05-25 03:45:18.342790 | orchestrator | 2025-05-25 03:45:18.343974 | orchestrator | TASK [Generate lvm_volumes structure (block only)] ***************************** 2025-05-25 03:45:18.345036 | orchestrator | Sunday 25 May 2025 03:45:18 +0000 (0:00:00.343) 0:00:39.121 ************ 2025-05-25 03:45:18.532503 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '33e996ff-67e1-5789-9eb3-97043475c088'}}) 2025-05-25 03:45:18.533002 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '3ece5568-3437-595e-b3ba-b2f91a77c86c'}}) 2025-05-25 03:45:18.534313 | orchestrator | 2025-05-25 03:45:18.534455 | orchestrator | TASK [Generate lvm_volumes structure (block + db)] ***************************** 2025-05-25 03:45:18.535662 | orchestrator | Sunday 25 May 2025 03:45:18 +0000 (0:00:00.189) 0:00:39.310 ************ 2025-05-25 03:45:18.703663 | 
orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '33e996ff-67e1-5789-9eb3-97043475c088'}})  2025-05-25 03:45:18.703790 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '3ece5568-3437-595e-b3ba-b2f91a77c86c'}})  2025-05-25 03:45:18.704413 | orchestrator | skipping: [testbed-node-5] 2025-05-25 03:45:18.705321 | orchestrator | 2025-05-25 03:45:18.705941 | orchestrator | TASK [Generate lvm_volumes structure (block + wal)] **************************** 2025-05-25 03:45:18.706627 | orchestrator | Sunday 25 May 2025 03:45:18 +0000 (0:00:00.172) 0:00:39.483 ************ 2025-05-25 03:45:18.883776 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '33e996ff-67e1-5789-9eb3-97043475c088'}})  2025-05-25 03:45:18.884853 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '3ece5568-3437-595e-b3ba-b2f91a77c86c'}})  2025-05-25 03:45:18.885213 | orchestrator | skipping: [testbed-node-5] 2025-05-25 03:45:18.887780 | orchestrator | 2025-05-25 03:45:18.889335 | orchestrator | TASK [Generate lvm_volumes structure (block + db + wal)] *********************** 2025-05-25 03:45:18.890195 | orchestrator | Sunday 25 May 2025 03:45:18 +0000 (0:00:00.179) 0:00:39.662 ************ 2025-05-25 03:45:19.042116 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '33e996ff-67e1-5789-9eb3-97043475c088'}})  2025-05-25 03:45:19.042922 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '3ece5568-3437-595e-b3ba-b2f91a77c86c'}})  2025-05-25 03:45:19.045535 | orchestrator | skipping: [testbed-node-5] 2025-05-25 03:45:19.046526 | orchestrator | 2025-05-25 03:45:19.047491 | orchestrator | TASK [Compile lvm_volumes] ***************************************************** 2025-05-25 03:45:19.048515 | orchestrator | Sunday 25 May 2025 03:45:19 +0000 
(0:00:00.156) 0:00:39.819 ************ 2025-05-25 03:45:19.178157 | orchestrator | ok: [testbed-node-5] 2025-05-25 03:45:19.178571 | orchestrator | 2025-05-25 03:45:19.179391 | orchestrator | TASK [Set OSD devices config data] ********************************************* 2025-05-25 03:45:19.179603 | orchestrator | Sunday 25 May 2025 03:45:19 +0000 (0:00:00.138) 0:00:39.958 ************ 2025-05-25 03:45:19.325239 | orchestrator | ok: [testbed-node-5] 2025-05-25 03:45:19.327978 | orchestrator | 2025-05-25 03:45:19.328654 | orchestrator | TASK [Set DB devices config data] ********************************************** 2025-05-25 03:45:19.329693 | orchestrator | Sunday 25 May 2025 03:45:19 +0000 (0:00:00.146) 0:00:40.104 ************ 2025-05-25 03:45:19.453226 | orchestrator | skipping: [testbed-node-5] 2025-05-25 03:45:19.453788 | orchestrator | 2025-05-25 03:45:19.455374 | orchestrator | TASK [Set WAL devices config data] ********************************************* 2025-05-25 03:45:19.457060 | orchestrator | Sunday 25 May 2025 03:45:19 +0000 (0:00:00.128) 0:00:40.232 ************ 2025-05-25 03:45:19.579450 | orchestrator | skipping: [testbed-node-5] 2025-05-25 03:45:19.579668 | orchestrator | 2025-05-25 03:45:19.580710 | orchestrator | TASK [Set DB+WAL devices config data] ****************************************** 2025-05-25 03:45:19.582646 | orchestrator | Sunday 25 May 2025 03:45:19 +0000 (0:00:00.125) 0:00:40.358 ************ 2025-05-25 03:45:19.712858 | orchestrator | skipping: [testbed-node-5] 2025-05-25 03:45:19.713068 | orchestrator | 2025-05-25 03:45:19.714463 | orchestrator | TASK [Print ceph_osd_devices] ************************************************** 2025-05-25 03:45:19.715720 | orchestrator | Sunday 25 May 2025 03:45:19 +0000 (0:00:00.134) 0:00:40.493 ************ 2025-05-25 03:45:19.863641 | orchestrator | ok: [testbed-node-5] => { 2025-05-25 03:45:19.864122 | orchestrator |  "ceph_osd_devices": { 2025-05-25 03:45:19.865279 | orchestrator |  
"sdb": {
2025-05-25 03:45:19.866873 | orchestrator |  "osd_lvm_uuid": "33e996ff-67e1-5789-9eb3-97043475c088"
2025-05-25 03:45:19.867718 | orchestrator |  },
2025-05-25 03:45:19.868294 | orchestrator |  "sdc": {
2025-05-25 03:45:19.868999 | orchestrator |  "osd_lvm_uuid": "3ece5568-3437-595e-b3ba-b2f91a77c86c"
2025-05-25 03:45:19.869720 | orchestrator |  }
2025-05-25 03:45:19.870155 | orchestrator |  }
2025-05-25 03:45:19.870763 | orchestrator | }
2025-05-25 03:45:19.871197 | orchestrator |
2025-05-25 03:45:19.871889 | orchestrator | TASK [Print WAL devices] *******************************************************
2025-05-25 03:45:19.872188 | orchestrator | Sunday 25 May 2025 03:45:19 +0000 (0:00:00.149) 0:00:40.642 ************
2025-05-25 03:45:20.002312 | orchestrator | skipping: [testbed-node-5]
2025-05-25 03:45:20.002909 | orchestrator |
2025-05-25 03:45:20.003997 | orchestrator | TASK [Print DB devices] ********************************************************
2025-05-25 03:45:20.004864 | orchestrator | Sunday 25 May 2025 03:45:19 +0000 (0:00:00.135) 0:00:40.778 ************
2025-05-25 03:45:20.321706 | orchestrator | skipping: [testbed-node-5]
2025-05-25 03:45:20.325045 | orchestrator |
2025-05-25 03:45:20.326791 | orchestrator | TASK [Print shared DB/WAL devices] *********************************************
2025-05-25 03:45:20.327170 | orchestrator | Sunday 25 May 2025 03:45:20 +0000 (0:00:00.318) 0:00:41.096 ************
2025-05-25 03:45:20.443638 | orchestrator | skipping: [testbed-node-5]
2025-05-25 03:45:20.443741 | orchestrator |
2025-05-25 03:45:20.446325 | orchestrator | TASK [Print configuration data] ************************************************
2025-05-25 03:45:20.446364 | orchestrator | Sunday 25 May 2025 03:45:20 +0000 (0:00:00.123) 0:00:41.219 ************
2025-05-25 03:45:20.645118 | orchestrator | changed: [testbed-node-5] => {
2025-05-25 03:45:20.646207 | orchestrator |  "_ceph_configure_lvm_config_data": {
2025-05-25 03:45:20.646785 | orchestrator |  "ceph_osd_devices": {
2025-05-25 03:45:20.647284 | orchestrator |  "sdb": {
2025-05-25 03:45:20.647945 | orchestrator |  "osd_lvm_uuid": "33e996ff-67e1-5789-9eb3-97043475c088"
2025-05-25 03:45:20.648721 | orchestrator |  },
2025-05-25 03:45:20.649897 | orchestrator |  "sdc": {
2025-05-25 03:45:20.652780 | orchestrator |  "osd_lvm_uuid": "3ece5568-3437-595e-b3ba-b2f91a77c86c"
2025-05-25 03:45:20.652836 | orchestrator |  }
2025-05-25 03:45:20.652850 | orchestrator |  },
2025-05-25 03:45:20.652861 | orchestrator |  "lvm_volumes": [
2025-05-25 03:45:20.653785 | orchestrator |  {
2025-05-25 03:45:20.655163 | orchestrator |  "data": "osd-block-33e996ff-67e1-5789-9eb3-97043475c088",
2025-05-25 03:45:20.655998 | orchestrator |  "data_vg": "ceph-33e996ff-67e1-5789-9eb3-97043475c088"
2025-05-25 03:45:20.657296 | orchestrator |  },
2025-05-25 03:45:20.657905 | orchestrator |  {
2025-05-25 03:45:20.658742 | orchestrator |  "data": "osd-block-3ece5568-3437-595e-b3ba-b2f91a77c86c",
2025-05-25 03:45:20.659241 | orchestrator |  "data_vg": "ceph-3ece5568-3437-595e-b3ba-b2f91a77c86c"
2025-05-25 03:45:20.660100 | orchestrator |  }
2025-05-25 03:45:20.660457 | orchestrator |  ]
2025-05-25 03:45:20.661396 | orchestrator |  }
2025-05-25 03:45:20.661747 | orchestrator | }
2025-05-25 03:45:20.662420 | orchestrator |
2025-05-25 03:45:20.662996 | orchestrator | RUNNING HANDLER [Write configuration file] *************************************
2025-05-25 03:45:20.664052 | orchestrator | Sunday 25 May 2025 03:45:20 +0000 (0:00:00.205) 0:00:41.425 ************
2025-05-25 03:45:21.600918 | orchestrator | changed: [testbed-node-5 -> testbed-manager(192.168.16.5)]
2025-05-25 03:45:21.601357 | orchestrator |
2025-05-25 03:45:21.605992 | orchestrator | PLAY RECAP *********************************************************************
2025-05-25 03:45:21.606134 | orchestrator | 2025-05-25 03:45:21 | INFO  | Play has been completed. There may now be a delay until all logs have been written.
2025-05-25 03:45:21.606153 | orchestrator | 2025-05-25 03:45:21 | INFO  | Please wait and do not abort execution.
2025-05-25 03:45:21.606514 | orchestrator | testbed-node-3 : ok=42  changed=2  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0
2025-05-25 03:45:21.606904 | orchestrator | testbed-node-4 : ok=42  changed=2  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0
2025-05-25 03:45:21.607711 | orchestrator | testbed-node-5 : ok=42  changed=2  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0
2025-05-25 03:45:21.608229 | orchestrator |
2025-05-25 03:45:21.608629 | orchestrator |
2025-05-25 03:45:21.609026 | orchestrator |
2025-05-25 03:45:21.609379 | orchestrator | TASKS RECAP ********************************************************************
2025-05-25 03:45:21.609953 | orchestrator | Sunday 25 May 2025 03:45:21 +0000 (0:00:00.953) 0:00:42.379 ************
2025-05-25 03:45:21.610249 | orchestrator | ===============================================================================
2025-05-25 03:45:21.610785 | orchestrator | Write configuration file ------------------------------------------------ 4.15s
2025-05-25 03:45:21.611347 | orchestrator | Add known links to the list of available block devices ------------------ 1.15s
2025-05-25 03:45:21.612024 | orchestrator | Add known partitions to the list of available block devices ------------- 1.15s
2025-05-25 03:45:21.612210 | orchestrator | Add known partitions to the list of available block devices ------------- 1.08s
2025-05-25 03:45:21.612674 | orchestrator | Get initial list of available block devices ----------------------------- 0.92s
2025-05-25 03:45:21.613024 | orchestrator | Get extra vars for Ceph configuration ----------------------------------- 0.87s
2025-05-25 03:45:21.613713 | orchestrator | Add known links to the list of available block devices ------------------ 0.82s
2025-05-25 03:45:21.614304 | orchestrator | Add known links to the list of available block devices ------------------ 0.80s
2025-05-25 03:45:21.614327 | orchestrator | Generate lvm_volumes structure (block + wal) ---------------------------- 0.80s
2025-05-25 03:45:21.614648 | orchestrator | Set UUIDs for OSD VGs/LVs ----------------------------------------------- 0.76s
2025-05-25 03:45:21.614872 | orchestrator | Define lvm_volumes structures ------------------------------------------- 0.72s
2025-05-25 03:45:21.615291 | orchestrator | Add known partitions to the list of available block devices ------------- 0.67s
2025-05-25 03:45:21.615544 | orchestrator | Add known links to the list of available block devices ------------------ 0.64s
2025-05-25 03:45:21.616278 | orchestrator | Add known partitions to the list of available block devices ------------- 0.64s
2025-05-25 03:45:21.616524 | orchestrator | Add known links to the list of available block devices ------------------ 0.64s
2025-05-25 03:45:21.616849 | orchestrator | Print DB devices -------------------------------------------------------- 0.64s
2025-05-25 03:45:21.617234 | orchestrator | Add known links to the list of available block devices ------------------ 0.63s
2025-05-25 03:45:21.617969 | orchestrator | Add known partitions to the list of available block devices ------------- 0.63s
2025-05-25 03:45:21.618463 | orchestrator | Add known partitions to the list of available block devices ------------- 0.62s
2025-05-25 03:45:21.619173 | orchestrator | Generate lvm_volumes structure (block + db) ----------------------------- 0.58s
2025-05-25 03:45:33.584628 | orchestrator | 2025-05-25 03:45:33 | INFO  | Task d9648a65-b34c-454b-b397-22b38bf1bfaf (sync inventory) is running in background. Output coming soon.
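The "Print configuration data" task above shows how the play derives the ceph-ansible `lvm_volumes` list from the `ceph_osd_devices` dict: each OSD disk gets a stable `osd_lvm_uuid`, and the LV/VG names are built as `osd-block-<uuid>` and `ceph-<uuid>`. A minimal sketch of that mapping (not the actual OSISM role code, just the structure visible in the log output):

```python
# Sketch only: reproduce the ceph_osd_devices -> lvm_volumes mapping
# printed by the "Print configuration data" task. UUIDs are the ones
# from the log; the naming scheme is read off the printed structure.
ceph_osd_devices = {
    "sdb": {"osd_lvm_uuid": "33e996ff-67e1-5789-9eb3-97043475c088"},
    "sdc": {"osd_lvm_uuid": "3ece5568-3437-595e-b3ba-b2f91a77c86c"},
}

def build_lvm_volumes(devices: dict) -> list:
    """Build one lvm_volumes entry per OSD device."""
    volumes = []
    for device in sorted(devices):
        uuid = devices[device]["osd_lvm_uuid"]
        volumes.append({
            "data": f"osd-block-{uuid}",    # logical volume name
            "data_vg": f"ceph-{uuid}",      # volume group name
        })
    return volumes

lvm_volumes = build_lvm_volumes(ceph_osd_devices)
```

The resulting list matches the `lvm_volumes` structure written to the configuration file by the handler above.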
2025-05-25 03:46:18.024713 | orchestrator | 2025-05-25 03:46:01 | INFO  | Starting group_vars file reorganization
2025-05-25 03:46:18.024831 | orchestrator | 2025-05-25 03:46:01 | INFO  | Moved 0 file(s) to their respective directories
2025-05-25 03:46:18.024848 | orchestrator | 2025-05-25 03:46:01 | INFO  | Group_vars file reorganization completed
2025-05-25 03:46:18.024860 | orchestrator | 2025-05-25 03:46:03 | INFO  | Starting variable preparation from inventory
2025-05-25 03:46:18.024872 | orchestrator | 2025-05-25 03:46:04 | INFO  | Writing 050-kolla-ceph-rgw-hosts.yml with ceph_rgw_hosts
2025-05-25 03:46:18.024883 | orchestrator | 2025-05-25 03:46:04 | INFO  | Writing 050-infrastructure-cephclient-mons.yml with cephclient_mons
2025-05-25 03:46:18.024895 | orchestrator | 2025-05-25 03:46:04 | INFO  | Writing 050-ceph-cluster-fsid.yml with ceph_cluster_fsid
2025-05-25 03:46:18.024906 | orchestrator | 2025-05-25 03:46:04 | INFO  | 3 file(s) written, 6 host(s) processed
2025-05-25 03:46:18.024917 | orchestrator | 2025-05-25 03:46:04 | INFO  | Variable preparation completed:
2025-05-25 03:46:18.024928 | orchestrator | 2025-05-25 03:46:05 | INFO  | Starting inventory overwrite handling
2025-05-25 03:46:18.024939 | orchestrator | 2025-05-25 03:46:05 | INFO  | Handling group overwrites in 99-overwrite
2025-05-25 03:46:18.024950 | orchestrator | 2025-05-25 03:46:05 | INFO  | Removing group frr:children from 60-generic
2025-05-25 03:46:18.024997 | orchestrator | 2025-05-25 03:46:05 | INFO  | Removing group storage:children from 50-kolla
2025-05-25 03:46:18.025010 | orchestrator | 2025-05-25 03:46:05 | INFO  | Removing group netbird:children from 50-infrastruture
2025-05-25 03:46:18.025021 | orchestrator | 2025-05-25 03:46:05 | INFO  | Removing group ceph-mds from 50-ceph
2025-05-25 03:46:18.025032 | orchestrator | 2025-05-25 03:46:05 | INFO  | Removing group ceph-rgw from 50-ceph
2025-05-25 03:46:18.025043 | orchestrator | 2025-05-25 03:46:05 | INFO  | Handling group overwrites in 20-roles
2025-05-25 03:46:18.025053 | orchestrator | 2025-05-25 03:46:05 | INFO  | Removing group k3s_node from 50-infrastruture
2025-05-25 03:46:18.025064 | orchestrator | 2025-05-25 03:46:05 | INFO  | Removed 6 group(s) in total
2025-05-25 03:46:18.025080 | orchestrator | 2025-05-25 03:46:05 | INFO  | Inventory overwrite handling completed
2025-05-25 03:46:18.025140 | orchestrator | 2025-05-25 03:46:06 | INFO  | Starting merge of inventory files
2025-05-25 03:46:18.025161 | orchestrator | 2025-05-25 03:46:06 | INFO  | Inventory files merged successfully
2025-05-25 03:46:18.025181 | orchestrator | 2025-05-25 03:46:10 | INFO  | Generating ClusterShell configuration from Ansible inventory
2025-05-25 03:46:18.025199 | orchestrator | 2025-05-25 03:46:17 | INFO  | Successfully wrote ClusterShell configuration
2025-05-25 03:46:19.934347 | orchestrator | 2025-05-25 03:46:19 | INFO  | Task 474906e9-07ad-4226-bca4-06eac07672d4 (ceph-create-lvm-devices) was prepared for execution.
2025-05-25 03:46:19.934474 | orchestrator | 2025-05-25 03:46:19 | INFO  | It takes a moment until task 474906e9-07ad-4226-bca4-06eac07672d4 (ceph-create-lvm-devices) has been started and output is visible here.
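The sync-inventory task above ends by generating a ClusterShell configuration from the Ansible inventory. Conceptually this maps each inventory group to a clush group definition (`groupname: host1,host2,...`). A hypothetical sketch of that conversion (group names and hosts here are illustrative, not taken from the actual inventory files):

```python
# Sketch only: render ClusterShell-style group definitions from a dict of
# inventory groups. The real OSISM tooling reads the merged Ansible
# inventory; the groups below are made-up examples.
inventory_groups = {
    "ceph-osd": ["testbed-node-3", "testbed-node-4", "testbed-node-5"],
    "manager": ["testbed-manager"],
}

def render_clush_groups(groups: dict) -> str:
    """One 'group: host,host,...' line per inventory group."""
    lines = [f"{name}: {','.join(hosts)}"
             for name, hosts in sorted(groups.items())]
    return "\n".join(lines) + "\n"
```

The rendered text would then be written to a ClusterShell groups file so that `clush -g ceph-osd ...` targets the same hosts as the Ansible group.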
2025-05-25 03:46:24.195663 | orchestrator |
2025-05-25 03:46:24.198589 | orchestrator | PLAY [Ceph create LVM devices] *************************************************
2025-05-25 03:46:24.199831 | orchestrator |
2025-05-25 03:46:24.200758 | orchestrator | TASK [Get extra vars for Ceph configuration] ***********************************
2025-05-25 03:46:24.201923 | orchestrator | Sunday 25 May 2025 03:46:24 +0000 (0:00:00.303) 0:00:00.304 ************
2025-05-25 03:46:24.433329 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)]
2025-05-25 03:46:24.435652 | orchestrator |
2025-05-25 03:46:24.437572 | orchestrator | TASK [Get initial list of available block devices] *****************************
2025-05-25 03:46:24.438361 | orchestrator | Sunday 25 May 2025 03:46:24 +0000 (0:00:00.240) 0:00:00.544 ************
2025-05-25 03:46:24.658314 | orchestrator | ok: [testbed-node-3]
2025-05-25 03:46:24.658437 | orchestrator |
2025-05-25 03:46:24.662207 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-05-25 03:46:24.665639 | orchestrator | Sunday 25 May 2025 03:46:24 +0000 (0:00:00.223) 0:00:00.768 ************
2025-05-25 03:46:25.062310 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop0)
2025-05-25 03:46:25.062463 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop1)
2025-05-25 03:46:25.062730 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop2)
2025-05-25 03:46:25.063174 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop3)
2025-05-25 03:46:25.063510 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop4)
2025-05-25 03:46:25.063956 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop5)
2025-05-25 03:46:25.065375 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop6)
2025-05-25 03:46:25.067562 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop7)
2025-05-25 03:46:25.070219 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sda)
2025-05-25 03:46:25.073177 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdb)
2025-05-25 03:46:25.073920 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdc)
2025-05-25 03:46:25.074749 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdd)
2025-05-25 03:46:25.078310 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sr0)
2025-05-25 03:46:25.080080 | orchestrator |
2025-05-25 03:46:25.080672 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-05-25 03:46:25.081420 | orchestrator | Sunday 25 May 2025 03:46:25 +0000 (0:00:00.407) 0:00:01.175 ************
2025-05-25 03:46:25.507707 | orchestrator | skipping: [testbed-node-3]
2025-05-25 03:46:25.508241 | orchestrator |
2025-05-25 03:46:25.511403 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-05-25 03:46:25.511496 | orchestrator | Sunday 25 May 2025 03:46:25 +0000 (0:00:00.443) 0:00:01.619 ************
2025-05-25 03:46:25.697280 | orchestrator | skipping: [testbed-node-3]
2025-05-25 03:46:25.697384 | orchestrator |
2025-05-25 03:46:25.700139 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-05-25 03:46:25.700725 | orchestrator | Sunday 25 May 2025 03:46:25 +0000 (0:00:00.188) 0:00:01.808 ************
2025-05-25 03:46:25.896774 | orchestrator | skipping: [testbed-node-3]
2025-05-25 03:46:25.897180 | orchestrator |
2025-05-25 03:46:25.899338 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-05-25 03:46:25.899881 | orchestrator | Sunday 25 May 2025 03:46:25 +0000 (0:00:00.200) 0:00:02.009 ************
2025-05-25 03:46:26.104147 | orchestrator | skipping: [testbed-node-3]
2025-05-25 03:46:26.105268 | orchestrator |
2025-05-25 03:46:26.105734 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-05-25 03:46:26.106526 | orchestrator | Sunday 25 May 2025 03:46:26 +0000 (0:00:00.203) 0:00:02.213 ************
2025-05-25 03:46:26.305798 | orchestrator | skipping: [testbed-node-3]
2025-05-25 03:46:26.306580 | orchestrator |
2025-05-25 03:46:26.308820 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-05-25 03:46:26.311302 | orchestrator | Sunday 25 May 2025 03:46:26 +0000 (0:00:00.203) 0:00:02.416 ************
2025-05-25 03:46:26.511996 | orchestrator | skipping: [testbed-node-3]
2025-05-25 03:46:26.512526 | orchestrator |
2025-05-25 03:46:26.513409 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-05-25 03:46:26.514612 | orchestrator | Sunday 25 May 2025 03:46:26 +0000 (0:00:00.207) 0:00:02.624 ************
2025-05-25 03:46:26.731461 | orchestrator | skipping: [testbed-node-3]
2025-05-25 03:46:26.731564 | orchestrator |
2025-05-25 03:46:26.731579 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-05-25 03:46:26.733556 | orchestrator | Sunday 25 May 2025 03:46:26 +0000 (0:00:00.218) 0:00:02.842 ************
2025-05-25 03:46:26.926442 | orchestrator | skipping: [testbed-node-3]
2025-05-25 03:46:26.928167 | orchestrator |
2025-05-25 03:46:26.929629 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-05-25 03:46:26.931347 | orchestrator | Sunday 25 May 2025 03:46:26 +0000 (0:00:00.195) 0:00:03.037 ************
2025-05-25 03:46:27.333150 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_d33fe3e5-8e27-4cea-af5e-c9a31aaf43f7)
2025-05-25 03:46:27.333247 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_d33fe3e5-8e27-4cea-af5e-c9a31aaf43f7)
2025-05-25 03:46:27.334776 | orchestrator |
2025-05-25 03:46:27.335676 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-05-25 03:46:27.336448 | orchestrator | Sunday 25 May 2025 03:46:27 +0000 (0:00:00.405) 0:00:03.443 ************
2025-05-25 03:46:27.740484 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_cdfa8505-de86-48ff-8ed6-b6e1381a94b2)
2025-05-25 03:46:27.740936 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_cdfa8505-de86-48ff-8ed6-b6e1381a94b2)
2025-05-25 03:46:27.741830 | orchestrator |
2025-05-25 03:46:27.742225 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-05-25 03:46:27.742621 | orchestrator | Sunday 25 May 2025 03:46:27 +0000 (0:00:00.407) 0:00:03.851 ************
2025-05-25 03:46:28.360376 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_4276f8fa-1a41-4d3c-8190-a1d2d3b80049)
2025-05-25 03:46:28.361416 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_4276f8fa-1a41-4d3c-8190-a1d2d3b80049)
2025-05-25 03:46:28.365184 | orchestrator |
2025-05-25 03:46:28.366421 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-05-25 03:46:28.367386 | orchestrator | Sunday 25 May 2025 03:46:28 +0000 (0:00:00.619) 0:00:04.471 ************
2025-05-25 03:46:29.186931 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_dac67b12-4a3b-49b0-a18f-dd9740769fda)
2025-05-25 03:46:29.187040 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_dac67b12-4a3b-49b0-a18f-dd9740769fda)
2025-05-25 03:46:29.187054 | orchestrator |
2025-05-25 03:46:29.187591 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-05-25 03:46:29.187919 | orchestrator | Sunday 25 May 2025 03:46:29 +0000 (0:00:00.824) 0:00:05.296 ************
2025-05-25 03:46:29.509088 | orchestrator | ok: [testbed-node-3] => (item=ata-QEMU_DVD-ROM_QM00001)
2025-05-25 03:46:29.509830 | orchestrator |
2025-05-25 03:46:29.510631 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-05-25 03:46:29.511557 | orchestrator | Sunday 25 May 2025 03:46:29 +0000 (0:00:00.324) 0:00:05.620 ************
2025-05-25 03:46:29.921577 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop0)
2025-05-25 03:46:29.921676 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop1)
2025-05-25 03:46:29.922390 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop2)
2025-05-25 03:46:29.925737 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop3)
2025-05-25 03:46:29.926406 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop4)
2025-05-25 03:46:29.928364 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop5)
2025-05-25 03:46:29.929813 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop6)
2025-05-25 03:46:29.931421 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop7)
2025-05-25 03:46:29.932677 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sda)
2025-05-25 03:46:29.933851 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdb)
2025-05-25 03:46:29.934930 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdc)
2025-05-25 03:46:29.936258 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdd)
2025-05-25 03:46:29.937730 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sr0)
2025-05-25 03:46:29.938845 | orchestrator |
2025-05-25 03:46:29.939920 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-05-25 03:46:29.940765 | orchestrator | Sunday 25 May 2025 03:46:29 +0000 (0:00:00.411) 0:00:06.031 ************
2025-05-25 03:46:30.133523 | orchestrator | skipping: [testbed-node-3]
2025-05-25 03:46:30.136349 | orchestrator |
2025-05-25 03:46:30.136422 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-05-25 03:46:30.136444 | orchestrator | Sunday 25 May 2025 03:46:30 +0000 (0:00:00.213) 0:00:06.245 ************
2025-05-25 03:46:30.329819 | orchestrator | skipping: [testbed-node-3]
2025-05-25 03:46:30.330755 | orchestrator |
2025-05-25 03:46:30.331258 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-05-25 03:46:30.332222 | orchestrator | Sunday 25 May 2025 03:46:30 +0000 (0:00:00.197) 0:00:06.443 ************
2025-05-25 03:46:30.533059 | orchestrator | skipping: [testbed-node-3]
2025-05-25 03:46:30.533200 | orchestrator |
2025-05-25 03:46:30.533303 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-05-25 03:46:30.534257 | orchestrator | Sunday 25 May 2025 03:46:30 +0000 (0:00:00.199) 0:00:06.642 ************
2025-05-25 03:46:30.725812 | orchestrator | skipping: [testbed-node-3]
2025-05-25 03:46:30.727576 | orchestrator |
2025-05-25 03:46:30.731454 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-05-25 03:46:30.731485 | orchestrator | Sunday 25 May 2025 03:46:30 +0000 (0:00:00.195) 0:00:06.838 ************
2025-05-25 03:46:30.912677 | orchestrator | skipping: [testbed-node-3]
2025-05-25 03:46:30.913216 | orchestrator |
2025-05-25 03:46:30.914311 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-05-25 03:46:30.914916 | orchestrator | Sunday 25 May 2025 03:46:30 +0000 (0:00:00.186) 0:00:07.024 ************
2025-05-25 03:46:31.116123 | orchestrator | skipping: [testbed-node-3]
2025-05-25 03:46:31.116324 | orchestrator |
2025-05-25 03:46:31.117525 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-05-25 03:46:31.118697 | orchestrator | Sunday 25 May 2025 03:46:31 +0000 (0:00:00.203) 0:00:07.227 ************
2025-05-25 03:46:31.290989 | orchestrator | skipping: [testbed-node-3]
2025-05-25 03:46:31.291975 | orchestrator |
2025-05-25 03:46:31.295759 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-05-25 03:46:31.295818 | orchestrator | Sunday 25 May 2025 03:46:31 +0000 (0:00:00.172) 0:00:07.400 ************
2025-05-25 03:46:31.475157 | orchestrator | skipping: [testbed-node-3]
2025-05-25 03:46:31.475659 | orchestrator |
2025-05-25 03:46:31.477290 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-05-25 03:46:31.480649 | orchestrator | Sunday 25 May 2025 03:46:31 +0000 (0:00:00.186) 0:00:07.586 ************
2025-05-25 03:46:32.554651 | orchestrator | ok: [testbed-node-3] => (item=sda1)
2025-05-25 03:46:32.559318 | orchestrator | ok: [testbed-node-3] => (item=sda14)
2025-05-25 03:46:32.560254 | orchestrator | ok: [testbed-node-3] => (item=sda15)
2025-05-25 03:46:32.561296 | orchestrator | ok: [testbed-node-3] => (item=sda16)
2025-05-25 03:46:32.562373 | orchestrator |
2025-05-25 03:46:32.563825 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-05-25 03:46:32.563864 | orchestrator | Sunday 25 May 2025 03:46:32 +0000 (0:00:01.078) 0:00:08.664 ************
2025-05-25 03:46:32.744559 | orchestrator | skipping: [testbed-node-3]
2025-05-25 03:46:32.748741 | orchestrator |
2025-05-25 03:46:32.748759 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-05-25 03:46:32.748767 | orchestrator | Sunday 25 May 2025 03:46:32 +0000 (0:00:00.193) 0:00:08.858 ************
2025-05-25 03:46:32.951465 | orchestrator | skipping: [testbed-node-3]
2025-05-25 03:46:32.952226 | orchestrator |
2025-05-25 03:46:32.953760 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-05-25 03:46:32.957434 | orchestrator | Sunday 25 May 2025 03:46:32 +0000 (0:00:00.205) 0:00:09.063 ************
2025-05-25 03:46:33.149888 | orchestrator | skipping: [testbed-node-3]
2025-05-25 03:46:33.151039 | orchestrator |
2025-05-25 03:46:33.152416 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-05-25 03:46:33.156170 | orchestrator | Sunday 25 May 2025 03:46:33 +0000 (0:00:00.198) 0:00:09.262 ************
2025-05-25 03:46:33.348883 | orchestrator | skipping: [testbed-node-3]
2025-05-25 03:46:33.350972 | orchestrator |
2025-05-25 03:46:33.351078 | orchestrator | TASK [Check whether ceph_db_wal_devices is used exclusively] *******************
2025-05-25 03:46:33.351135 | orchestrator | Sunday 25 May 2025 03:46:33 +0000 (0:00:00.198) 0:00:09.461 ************
2025-05-25 03:46:33.502253 | orchestrator | skipping: [testbed-node-3]
2025-05-25 03:46:33.502861 | orchestrator |
2025-05-25 03:46:33.503995 | orchestrator | TASK [Create dict of block VGs -> PVs from ceph_osd_devices] *******************
2025-05-25 03:46:33.504951 | orchestrator | Sunday 25 May 2025 03:46:33 +0000 (0:00:00.151) 0:00:09.612 ************
2025-05-25 03:46:33.691278 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '02f362e7-7983-50b5-b688-a41104a01860'}})
2025-05-25 03:46:33.691828 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'b24cffad-8a1f-50fd-b816-ada28c3c4ac7'}})
2025-05-25 03:46:33.692528 | orchestrator |
2025-05-25 03:46:33.693962 | orchestrator | TASK [Create block VGs] ********************************************************
2025-05-25 03:46:33.694640 | orchestrator | Sunday 25 May 2025 03:46:33 +0000 (0:00:00.188) 0:00:09.801 ************
2025-05-25 03:46:35.674152 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-02f362e7-7983-50b5-b688-a41104a01860', 'data_vg': 'ceph-02f362e7-7983-50b5-b688-a41104a01860'})
2025-05-25 03:46:35.674600 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-b24cffad-8a1f-50fd-b816-ada28c3c4ac7', 'data_vg': 'ceph-b24cffad-8a1f-50fd-b816-ada28c3c4ac7'})
2025-05-25 03:46:35.675545 | orchestrator |
2025-05-25 03:46:35.676320 | orchestrator | TASK [Print 'Create block VGs'] ************************************************
2025-05-25 03:46:35.677237 | orchestrator | Sunday 25 May 2025 03:46:35 +0000 (0:00:01.983) 0:00:11.785 ************
2025-05-25 03:46:35.820744 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-02f362e7-7983-50b5-b688-a41104a01860', 'data_vg': 'ceph-02f362e7-7983-50b5-b688-a41104a01860'})
2025-05-25 03:46:35.821137 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-b24cffad-8a1f-50fd-b816-ada28c3c4ac7', 'data_vg': 'ceph-b24cffad-8a1f-50fd-b816-ada28c3c4ac7'})
2025-05-25 03:46:35.821994 | orchestrator | skipping: [testbed-node-3]
2025-05-25 03:46:35.823049 | orchestrator |
2025-05-25 03:46:35.823594 | orchestrator | TASK [Create block LVs] ********************************************************
2025-05-25 03:46:35.824184 | orchestrator | Sunday 25 May 2025 03:46:35 +0000 (0:00:00.147) 0:00:11.933 ************
2025-05-25 03:46:37.263290 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-02f362e7-7983-50b5-b688-a41104a01860', 'data_vg': 'ceph-02f362e7-7983-50b5-b688-a41104a01860'})
2025-05-25 03:46:37.264257 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-b24cffad-8a1f-50fd-b816-ada28c3c4ac7', 'data_vg': 'ceph-b24cffad-8a1f-50fd-b816-ada28c3c4ac7'})
2025-05-25 03:46:37.264570 | orchestrator |
2025-05-25 03:46:37.265465 | orchestrator | TASK [Print 'Create block LVs'] ************************************************
2025-05-25 03:46:37.265870 | orchestrator | Sunday 25 May 2025 03:46:37 +0000 (0:00:01.439) 0:00:13.372 ************
2025-05-25 03:46:37.413134 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-02f362e7-7983-50b5-b688-a41104a01860', 'data_vg': 'ceph-02f362e7-7983-50b5-b688-a41104a01860'})
2025-05-25 03:46:37.413656 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-b24cffad-8a1f-50fd-b816-ada28c3c4ac7', 'data_vg': 'ceph-b24cffad-8a1f-50fd-b816-ada28c3c4ac7'})
2025-05-25 03:46:37.415967 | orchestrator | skipping: [testbed-node-3]
2025-05-25 03:46:37.416160 | orchestrator |
2025-05-25 03:46:37.416259 | orchestrator | TASK [Create DB VGs] ***********************************************************
2025-05-25 03:46:37.418573 | orchestrator | Sunday 25 May 2025 03:46:37 +0000 (0:00:00.152) 0:00:13.524 ************
2025-05-25 03:46:37.563236 | orchestrator | skipping: [testbed-node-3]
2025-05-25 03:46:37.565985 | orchestrator |
2025-05-25 03:46:37.566173 | orchestrator | TASK [Print 'Create DB VGs'] ***************************************************
2025-05-25 03:46:37.566418 | orchestrator | Sunday 25 May 2025 03:46:37 +0000 (0:00:00.149) 0:00:13.674 ************
2025-05-25 03:46:37.909897 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-02f362e7-7983-50b5-b688-a41104a01860', 'data_vg': 'ceph-02f362e7-7983-50b5-b688-a41104a01860'})
2025-05-25 03:46:37.911306 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-b24cffad-8a1f-50fd-b816-ada28c3c4ac7', 'data_vg': 'ceph-b24cffad-8a1f-50fd-b816-ada28c3c4ac7'})
2025-05-25 03:46:37.913625 | orchestrator | skipping: [testbed-node-3]
2025-05-25 03:46:37.913692 | orchestrator |
2025-05-25 03:46:37.914406 | orchestrator | TASK [Create WAL VGs] **********************************************************
2025-05-25 03:46:37.915243 | orchestrator | Sunday 25 May 2025 03:46:37 +0000 (0:00:00.347) 0:00:14.021 ************
2025-05-25 03:46:38.031966 | orchestrator | skipping: [testbed-node-3]
2025-05-25 03:46:38.034933 | orchestrator |
2025-05-25 03:46:38.035024 | orchestrator | TASK [Print 'Create WAL VGs'] **************************************************
2025-05-25 03:46:38.035887 | orchestrator | Sunday 25 May 2025 03:46:38 +0000 (0:00:00.121) 0:00:14.143 ************
2025-05-25 03:46:38.189489 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-02f362e7-7983-50b5-b688-a41104a01860', 'data_vg': 'ceph-02f362e7-7983-50b5-b688-a41104a01860'})
2025-05-25 03:46:38.190620 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-b24cffad-8a1f-50fd-b816-ada28c3c4ac7', 'data_vg': 'ceph-b24cffad-8a1f-50fd-b816-ada28c3c4ac7'})
2025-05-25 03:46:38.190960 | orchestrator | skipping: [testbed-node-3]
2025-05-25 03:46:38.191797 | orchestrator |
2025-05-25 03:46:38.192214 | orchestrator | TASK [Create DB+WAL VGs] *******************************************************
2025-05-25 03:46:38.193204 | orchestrator | Sunday 25 May 2025 03:46:38 +0000 (0:00:00.140) 0:00:14.302 ************
2025-05-25 03:46:38.329198 | orchestrator | skipping: [testbed-node-3]
2025-05-25 03:46:38.329993 | orchestrator |
2025-05-25 03:46:38.331369 | orchestrator | TASK [Print 'Create DB+WAL VGs'] ***********************************************
2025-05-25 03:46:38.332083 | orchestrator | Sunday 25 May 2025 03:46:38 +0000 (0:00:00.140) 0:00:14.442 ************
2025-05-25 03:46:38.489505 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-02f362e7-7983-50b5-b688-a41104a01860', 'data_vg': 'ceph-02f362e7-7983-50b5-b688-a41104a01860'})
2025-05-25 03:46:38.491653 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-b24cffad-8a1f-50fd-b816-ada28c3c4ac7', 'data_vg': 'ceph-b24cffad-8a1f-50fd-b816-ada28c3c4ac7'})
2025-05-25 03:46:38.491682 | orchestrator | skipping: [testbed-node-3]
2025-05-25 03:46:38.491868 | orchestrator |
2025-05-25 03:46:38.492231 | orchestrator | TASK [Prepare variables for OSD count check] ***********************************
2025-05-25 03:46:38.492326 | orchestrator | Sunday 25 May 2025 03:46:38 +0000 (0:00:00.158) 0:00:14.601 ************
2025-05-25 03:46:38.632276 | orchestrator | ok: [testbed-node-3]
2025-05-25 03:46:38.633807 | orchestrator |
2025-05-25 03:46:38.633843 | orchestrator | TASK [Count OSDs put on ceph_db_devices defined in lvm_volumes] ****************
2025-05-25 03:46:38.633857 | orchestrator | Sunday 25 May 2025 03:46:38 +0000 (0:00:00.141) 0:00:14.742 ************
2025-05-25 03:46:38.793588 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-02f362e7-7983-50b5-b688-a41104a01860', 'data_vg': 'ceph-02f362e7-7983-50b5-b688-a41104a01860'})
2025-05-25 03:46:38.793715 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-b24cffad-8a1f-50fd-b816-ada28c3c4ac7', 'data_vg': 'ceph-b24cffad-8a1f-50fd-b816-ada28c3c4ac7'})
2025-05-25 03:46:38.793732 | orchestrator | skipping: [testbed-node-3]
2025-05-25 03:46:38.793849 | orchestrator |
2025-05-25 03:46:38.794647 | orchestrator | TASK [Count OSDs put on ceph_wal_devices defined in lvm_volumes] ***************
2025-05-25 03:46:38.795180 | orchestrator | Sunday 25 May 2025 03:46:38 +0000 (0:00:00.163) 0:00:14.905 ************
2025-05-25 03:46:38.933349 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-02f362e7-7983-50b5-b688-a41104a01860', 'data_vg': 'ceph-02f362e7-7983-50b5-b688-a41104a01860'})
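The "Create block VGs" / "Create block LVs" tasks above create one volume group and one logical volume per OSD disk, named after the entries in `lvm_volumes`. A minimal sketch of the equivalent LVM commands (the device path mapping is hypothetical here; the real play resolves physical devices from the discovered block-device list, and the `100%VG` sizing is an assumption):

```python
# Sketch only: render the vgcreate/lvcreate commands that conceptually
# correspond to the "Create block VGs" and "Create block LVs" tasks.
# The "device" key is a made-up illustration of the PV backing each VG.
lvm_volumes = [
    {"data": "osd-block-02f362e7-7983-50b5-b688-a41104a01860",
     "data_vg": "ceph-02f362e7-7983-50b5-b688-a41104a01860",
     "device": "/dev/sdb"},  # hypothetical device mapping
]

def render_lvm_commands(volumes: list) -> list:
    """One vgcreate + one lvcreate per OSD volume."""
    cmds = []
    for vol in volumes:
        # VG on the raw disk, then an LV spanning the whole VG (assumed sizing)
        cmds.append(f"vgcreate {vol['data_vg']} {vol['device']}")
        cmds.append(f"lvcreate -l 100%VG -n {vol['data']} {vol['data_vg']}")
    return cmds
```

ceph-volume later consumes these LVs via the `lvm_volumes` entries when preparing the OSDs.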
2025-05-25 03:46:38.934014 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-b24cffad-8a1f-50fd-b816-ada28c3c4ac7', 'data_vg': 'ceph-b24cffad-8a1f-50fd-b816-ada28c3c4ac7'})  2025-05-25 03:46:38.935308 | orchestrator | skipping: [testbed-node-3] 2025-05-25 03:46:38.936182 | orchestrator | 2025-05-25 03:46:38.937073 | orchestrator | TASK [Count OSDs put on ceph_db_wal_devices defined in lvm_volumes] ************ 2025-05-25 03:46:38.938482 | orchestrator | Sunday 25 May 2025 03:46:38 +0000 (0:00:00.140) 0:00:15.045 ************ 2025-05-25 03:46:39.077616 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-02f362e7-7983-50b5-b688-a41104a01860', 'data_vg': 'ceph-02f362e7-7983-50b5-b688-a41104a01860'})  2025-05-25 03:46:39.077785 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-b24cffad-8a1f-50fd-b816-ada28c3c4ac7', 'data_vg': 'ceph-b24cffad-8a1f-50fd-b816-ada28c3c4ac7'})  2025-05-25 03:46:39.079156 | orchestrator | skipping: [testbed-node-3] 2025-05-25 03:46:39.080297 | orchestrator | 2025-05-25 03:46:39.080689 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB VG] ********************* 2025-05-25 03:46:39.081897 | orchestrator | Sunday 25 May 2025 03:46:39 +0000 (0:00:00.144) 0:00:15.190 ************ 2025-05-25 03:46:39.215526 | orchestrator | skipping: [testbed-node-3] 2025-05-25 03:46:39.215622 | orchestrator | 2025-05-25 03:46:39.217011 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a WAL VG] ******************** 2025-05-25 03:46:39.218395 | orchestrator | Sunday 25 May 2025 03:46:39 +0000 (0:00:00.136) 0:00:15.327 ************ 2025-05-25 03:46:39.353480 | orchestrator | skipping: [testbed-node-3] 2025-05-25 03:46:39.354686 | orchestrator | 2025-05-25 03:46:39.356523 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB+WAL VG] ***************** 2025-05-25 03:46:39.356993 | orchestrator | Sunday 25 May 2025 03:46:39 +0000 (0:00:00.135) 
0:00:15.462 ************ 2025-05-25 03:46:39.482674 | orchestrator | skipping: [testbed-node-3] 2025-05-25 03:46:39.483720 | orchestrator | 2025-05-25 03:46:39.484828 | orchestrator | TASK [Print number of OSDs wanted per DB VG] *********************************** 2025-05-25 03:46:39.486178 | orchestrator | Sunday 25 May 2025 03:46:39 +0000 (0:00:00.133) 0:00:15.595 ************ 2025-05-25 03:46:39.822304 | orchestrator | ok: [testbed-node-3] => { 2025-05-25 03:46:39.823905 | orchestrator |  "_num_osds_wanted_per_db_vg": {} 2025-05-25 03:46:39.825002 | orchestrator | } 2025-05-25 03:46:39.826657 | orchestrator | 2025-05-25 03:46:39.827591 | orchestrator | TASK [Print number of OSDs wanted per WAL VG] ********************************** 2025-05-25 03:46:39.828854 | orchestrator | Sunday 25 May 2025 03:46:39 +0000 (0:00:00.336) 0:00:15.932 ************ 2025-05-25 03:46:39.969458 | orchestrator | ok: [testbed-node-3] => { 2025-05-25 03:46:39.971239 | orchestrator |  "_num_osds_wanted_per_wal_vg": {} 2025-05-25 03:46:39.972134 | orchestrator | } 2025-05-25 03:46:39.973026 | orchestrator | 2025-05-25 03:46:39.974108 | orchestrator | TASK [Print number of OSDs wanted per DB+WAL VG] ******************************* 2025-05-25 03:46:39.975333 | orchestrator | Sunday 25 May 2025 03:46:39 +0000 (0:00:00.146) 0:00:16.079 ************ 2025-05-25 03:46:40.115347 | orchestrator | ok: [testbed-node-3] => { 2025-05-25 03:46:40.116817 | orchestrator |  "_num_osds_wanted_per_db_wal_vg": {} 2025-05-25 03:46:40.117302 | orchestrator | } 2025-05-25 03:46:40.118266 | orchestrator | 2025-05-25 03:46:40.119467 | orchestrator | TASK [Gather DB VGs with total and available size in bytes] ******************** 2025-05-25 03:46:40.120076 | orchestrator | Sunday 25 May 2025 03:46:40 +0000 (0:00:00.148) 0:00:16.227 ************ 2025-05-25 03:46:40.735199 | orchestrator | ok: [testbed-node-3] 2025-05-25 03:46:40.736384 | orchestrator | 2025-05-25 03:46:40.737538 | orchestrator | TASK [Gather WAL 
VGs with total and available size in bytes] *******************
2025-05-25 03:46:40.738723 | orchestrator | Sunday 25 May 2025 03:46:40 +0000 (0:00:00.617) 0:00:16.845 ************
2025-05-25 03:46:41.207611 | orchestrator | ok: [testbed-node-3]
2025-05-25 03:46:41.208672 | orchestrator |
2025-05-25 03:46:41.209339 | orchestrator | TASK [Gather DB+WAL VGs with total and available size in bytes] ****************
2025-05-25 03:46:41.210656 | orchestrator | Sunday 25 May 2025 03:46:41 +0000 (0:00:00.474) 0:00:17.320 ************
2025-05-25 03:46:41.740828 | orchestrator | ok: [testbed-node-3]
2025-05-25 03:46:41.740934 | orchestrator |
2025-05-25 03:46:41.743032 | orchestrator | TASK [Combine JSON from _db/wal/db_wal_vgs_cmd_output] *************************
2025-05-25 03:46:41.743060 | orchestrator | Sunday 25 May 2025 03:46:41 +0000 (0:00:00.531) 0:00:17.851 ************
2025-05-25 03:46:41.881058 | orchestrator | ok: [testbed-node-3]
2025-05-25 03:46:41.881910 | orchestrator |
2025-05-25 03:46:41.883025 | orchestrator | TASK [Calculate VG sizes (without buffer)] *************************************
2025-05-25 03:46:41.883864 | orchestrator | Sunday 25 May 2025 03:46:41 +0000 (0:00:00.142) 0:00:17.993 ************
2025-05-25 03:46:41.985577 | orchestrator | skipping: [testbed-node-3]
2025-05-25 03:46:41.986255 | orchestrator |
2025-05-25 03:46:41.987279 | orchestrator | TASK [Calculate VG sizes (with buffer)] ****************************************
2025-05-25 03:46:41.988268 | orchestrator | Sunday 25 May 2025 03:46:41 +0000 (0:00:00.103) 0:00:18.096 ************
2025-05-25 03:46:42.092193 | orchestrator | skipping: [testbed-node-3]
2025-05-25 03:46:42.092331 | orchestrator |
2025-05-25 03:46:42.093137 | orchestrator | TASK [Print LVM VGs report data] ***********************************************
2025-05-25 03:46:42.093818 | orchestrator | Sunday 25 May 2025 03:46:42 +0000 (0:00:00.108) 0:00:18.205 ************
2025-05-25 03:46:42.226728 | orchestrator | ok: [testbed-node-3] => {
2025-05-25 03:46:42.226887 | orchestrator |  "vgs_report": {
2025-05-25 03:46:42.226904 | orchestrator |  "vg": []
2025-05-25 03:46:42.227736 | orchestrator |  }
2025-05-25 03:46:42.228246 | orchestrator | }
2025-05-25 03:46:42.229235 | orchestrator |
2025-05-25 03:46:42.229584 | orchestrator | TASK [Print LVM VG sizes] ******************************************************
2025-05-25 03:46:42.231058 | orchestrator | Sunday 25 May 2025 03:46:42 +0000 (0:00:00.133) 0:00:18.339 ************
2025-05-25 03:46:42.344083 | orchestrator | skipping: [testbed-node-3]
2025-05-25 03:46:42.344395 | orchestrator |
2025-05-25 03:46:42.346166 | orchestrator | TASK [Calculate size needed for LVs on ceph_db_devices] ************************
2025-05-25 03:46:42.347898 | orchestrator | Sunday 25 May 2025 03:46:42 +0000 (0:00:00.117) 0:00:18.456 ************
2025-05-25 03:46:42.480998 | orchestrator | skipping: [testbed-node-3]
2025-05-25 03:46:42.485196 | orchestrator |
2025-05-25 03:46:42.488397 | orchestrator | TASK [Print size needed for LVs on ceph_db_devices] ****************************
2025-05-25 03:46:42.489219 | orchestrator | Sunday 25 May 2025 03:46:42 +0000 (0:00:00.137) 0:00:18.593 ************
2025-05-25 03:46:42.828875 | orchestrator | skipping: [testbed-node-3]
2025-05-25 03:46:42.829387 | orchestrator |
2025-05-25 03:46:42.829921 | orchestrator | TASK [Fail if size of DB LVs on ceph_db_devices > available] *******************
2025-05-25 03:46:42.830714 | orchestrator | Sunday 25 May 2025 03:46:42 +0000 (0:00:00.347) 0:00:18.940 ************
2025-05-25 03:46:42.955693 | orchestrator | skipping: [testbed-node-3]
2025-05-25 03:46:42.956078 | orchestrator |
2025-05-25 03:46:42.957224 | orchestrator | TASK [Calculate size needed for LVs on ceph_wal_devices] ***********************
2025-05-25 03:46:42.958163 | orchestrator | Sunday 25 May 2025 03:46:42 +0000 (0:00:00.127) 0:00:19.068 ************
2025-05-25 03:46:43.090363 | orchestrator | skipping:
[testbed-node-3] 2025-05-25 03:46:43.090987 | orchestrator | 2025-05-25 03:46:43.091369 | orchestrator | TASK [Print size needed for LVs on ceph_wal_devices] *************************** 2025-05-25 03:46:43.091672 | orchestrator | Sunday 25 May 2025 03:46:43 +0000 (0:00:00.134) 0:00:19.202 ************ 2025-05-25 03:46:43.220306 | orchestrator | skipping: [testbed-node-3] 2025-05-25 03:46:43.220624 | orchestrator | 2025-05-25 03:46:43.221237 | orchestrator | TASK [Fail if size of WAL LVs on ceph_wal_devices > available] ***************** 2025-05-25 03:46:43.221578 | orchestrator | Sunday 25 May 2025 03:46:43 +0000 (0:00:00.130) 0:00:19.333 ************ 2025-05-25 03:46:43.369692 | orchestrator | skipping: [testbed-node-3] 2025-05-25 03:46:43.369901 | orchestrator | 2025-05-25 03:46:43.369923 | orchestrator | TASK [Calculate size needed for WAL LVs on ceph_db_wal_devices] **************** 2025-05-25 03:46:43.370639 | orchestrator | Sunday 25 May 2025 03:46:43 +0000 (0:00:00.147) 0:00:19.480 ************ 2025-05-25 03:46:43.511691 | orchestrator | skipping: [testbed-node-3] 2025-05-25 03:46:43.513318 | orchestrator | 2025-05-25 03:46:43.514471 | orchestrator | TASK [Print size needed for WAL LVs on ceph_db_wal_devices] ******************** 2025-05-25 03:46:43.515382 | orchestrator | Sunday 25 May 2025 03:46:43 +0000 (0:00:00.143) 0:00:19.624 ************ 2025-05-25 03:46:43.644363 | orchestrator | skipping: [testbed-node-3] 2025-05-25 03:46:43.645140 | orchestrator | 2025-05-25 03:46:43.645437 | orchestrator | TASK [Calculate size needed for DB LVs on ceph_db_wal_devices] ***************** 2025-05-25 03:46:43.645463 | orchestrator | Sunday 25 May 2025 03:46:43 +0000 (0:00:00.132) 0:00:19.757 ************ 2025-05-25 03:46:43.781041 | orchestrator | skipping: [testbed-node-3] 2025-05-25 03:46:43.781197 | orchestrator | 2025-05-25 03:46:43.781780 | orchestrator | TASK [Print size needed for DB LVs on ceph_db_wal_devices] ********************* 2025-05-25 03:46:43.782724 | 
orchestrator | Sunday 25 May 2025 03:46:43 +0000 (0:00:00.137) 0:00:19.894 ************ 2025-05-25 03:46:43.914966 | orchestrator | skipping: [testbed-node-3] 2025-05-25 03:46:43.915864 | orchestrator | 2025-05-25 03:46:43.916794 | orchestrator | TASK [Fail if size of DB+WAL LVs on ceph_db_wal_devices > available] *********** 2025-05-25 03:46:43.917633 | orchestrator | Sunday 25 May 2025 03:46:43 +0000 (0:00:00.133) 0:00:20.028 ************ 2025-05-25 03:46:44.057634 | orchestrator | skipping: [testbed-node-3] 2025-05-25 03:46:44.058318 | orchestrator | 2025-05-25 03:46:44.059334 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_devices] ************************* 2025-05-25 03:46:44.060139 | orchestrator | Sunday 25 May 2025 03:46:44 +0000 (0:00:00.142) 0:00:20.170 ************ 2025-05-25 03:46:44.195041 | orchestrator | skipping: [testbed-node-3] 2025-05-25 03:46:44.196887 | orchestrator | 2025-05-25 03:46:44.197505 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_wal_devices] ********************* 2025-05-25 03:46:44.198830 | orchestrator | Sunday 25 May 2025 03:46:44 +0000 (0:00:00.136) 0:00:20.307 ************ 2025-05-25 03:46:44.346249 | orchestrator | skipping: [testbed-node-3] 2025-05-25 03:46:44.347154 | orchestrator | 2025-05-25 03:46:44.349089 | orchestrator | TASK [Create DB LVs for ceph_db_devices] *************************************** 2025-05-25 03:46:44.350233 | orchestrator | Sunday 25 May 2025 03:46:44 +0000 (0:00:00.151) 0:00:20.458 ************ 2025-05-25 03:46:44.666179 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-02f362e7-7983-50b5-b688-a41104a01860', 'data_vg': 'ceph-02f362e7-7983-50b5-b688-a41104a01860'})  2025-05-25 03:46:44.667611 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-b24cffad-8a1f-50fd-b816-ada28c3c4ac7', 'data_vg': 'ceph-b24cffad-8a1f-50fd-b816-ada28c3c4ac7'})  2025-05-25 03:46:44.669919 | orchestrator | skipping: [testbed-node-3] 2025-05-25 
03:46:44.671017 | orchestrator | 2025-05-25 03:46:44.671822 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_devices'] ******************************* 2025-05-25 03:46:44.672947 | orchestrator | Sunday 25 May 2025 03:46:44 +0000 (0:00:00.318) 0:00:20.776 ************ 2025-05-25 03:46:44.815508 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-02f362e7-7983-50b5-b688-a41104a01860', 'data_vg': 'ceph-02f362e7-7983-50b5-b688-a41104a01860'})  2025-05-25 03:46:44.816460 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-b24cffad-8a1f-50fd-b816-ada28c3c4ac7', 'data_vg': 'ceph-b24cffad-8a1f-50fd-b816-ada28c3c4ac7'})  2025-05-25 03:46:44.817245 | orchestrator | skipping: [testbed-node-3] 2025-05-25 03:46:44.818343 | orchestrator | 2025-05-25 03:46:44.819694 | orchestrator | TASK [Create WAL LVs for ceph_wal_devices] ************************************* 2025-05-25 03:46:44.821015 | orchestrator | Sunday 25 May 2025 03:46:44 +0000 (0:00:00.150) 0:00:20.927 ************ 2025-05-25 03:46:44.953498 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-02f362e7-7983-50b5-b688-a41104a01860', 'data_vg': 'ceph-02f362e7-7983-50b5-b688-a41104a01860'})  2025-05-25 03:46:44.953592 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-b24cffad-8a1f-50fd-b816-ada28c3c4ac7', 'data_vg': 'ceph-b24cffad-8a1f-50fd-b816-ada28c3c4ac7'})  2025-05-25 03:46:44.954750 | orchestrator | skipping: [testbed-node-3] 2025-05-25 03:46:44.955568 | orchestrator | 2025-05-25 03:46:44.956251 | orchestrator | TASK [Print 'Create WAL LVs for ceph_wal_devices'] ***************************** 2025-05-25 03:46:44.957370 | orchestrator | Sunday 25 May 2025 03:46:44 +0000 (0:00:00.137) 0:00:21.065 ************ 2025-05-25 03:46:45.103176 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-02f362e7-7983-50b5-b688-a41104a01860', 'data_vg': 'ceph-02f362e7-7983-50b5-b688-a41104a01860'})  2025-05-25 
03:46:45.103276 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-b24cffad-8a1f-50fd-b816-ada28c3c4ac7', 'data_vg': 'ceph-b24cffad-8a1f-50fd-b816-ada28c3c4ac7'})  2025-05-25 03:46:45.103958 | orchestrator | skipping: [testbed-node-3] 2025-05-25 03:46:45.104066 | orchestrator | 2025-05-25 03:46:45.104718 | orchestrator | TASK [Create WAL LVs for ceph_db_wal_devices] ********************************** 2025-05-25 03:46:45.105221 | orchestrator | Sunday 25 May 2025 03:46:45 +0000 (0:00:00.150) 0:00:21.216 ************ 2025-05-25 03:46:45.249609 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-02f362e7-7983-50b5-b688-a41104a01860', 'data_vg': 'ceph-02f362e7-7983-50b5-b688-a41104a01860'})  2025-05-25 03:46:45.250139 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-b24cffad-8a1f-50fd-b816-ada28c3c4ac7', 'data_vg': 'ceph-b24cffad-8a1f-50fd-b816-ada28c3c4ac7'})  2025-05-25 03:46:45.250844 | orchestrator | skipping: [testbed-node-3] 2025-05-25 03:46:45.251551 | orchestrator | 2025-05-25 03:46:45.252250 | orchestrator | TASK [Print 'Create WAL LVs for ceph_db_wal_devices'] ************************** 2025-05-25 03:46:45.253614 | orchestrator | Sunday 25 May 2025 03:46:45 +0000 (0:00:00.145) 0:00:21.362 ************ 2025-05-25 03:46:45.415157 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-02f362e7-7983-50b5-b688-a41104a01860', 'data_vg': 'ceph-02f362e7-7983-50b5-b688-a41104a01860'})  2025-05-25 03:46:45.415990 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-b24cffad-8a1f-50fd-b816-ada28c3c4ac7', 'data_vg': 'ceph-b24cffad-8a1f-50fd-b816-ada28c3c4ac7'})  2025-05-25 03:46:45.416946 | orchestrator | skipping: [testbed-node-3] 2025-05-25 03:46:45.418895 | orchestrator | 2025-05-25 03:46:45.418930 | orchestrator | TASK [Create DB LVs for ceph_db_wal_devices] *********************************** 2025-05-25 03:46:45.419585 | orchestrator | Sunday 25 May 2025 03:46:45 
+0000 (0:00:00.165) 0:00:21.527 ************ 2025-05-25 03:46:45.565903 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-02f362e7-7983-50b5-b688-a41104a01860', 'data_vg': 'ceph-02f362e7-7983-50b5-b688-a41104a01860'})  2025-05-25 03:46:45.567528 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-b24cffad-8a1f-50fd-b816-ada28c3c4ac7', 'data_vg': 'ceph-b24cffad-8a1f-50fd-b816-ada28c3c4ac7'})  2025-05-25 03:46:45.568419 | orchestrator | skipping: [testbed-node-3] 2025-05-25 03:46:45.568922 | orchestrator | 2025-05-25 03:46:45.570659 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_wal_devices'] *************************** 2025-05-25 03:46:45.571350 | orchestrator | Sunday 25 May 2025 03:46:45 +0000 (0:00:00.151) 0:00:21.678 ************ 2025-05-25 03:46:45.716782 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-02f362e7-7983-50b5-b688-a41104a01860', 'data_vg': 'ceph-02f362e7-7983-50b5-b688-a41104a01860'})  2025-05-25 03:46:45.717400 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-b24cffad-8a1f-50fd-b816-ada28c3c4ac7', 'data_vg': 'ceph-b24cffad-8a1f-50fd-b816-ada28c3c4ac7'})  2025-05-25 03:46:45.718764 | orchestrator | skipping: [testbed-node-3] 2025-05-25 03:46:45.718982 | orchestrator | 2025-05-25 03:46:45.720993 | orchestrator | TASK [Get list of Ceph LVs with associated VGs] ******************************** 2025-05-25 03:46:45.722398 | orchestrator | Sunday 25 May 2025 03:46:45 +0000 (0:00:00.149) 0:00:21.828 ************ 2025-05-25 03:46:46.223628 | orchestrator | ok: [testbed-node-3] 2025-05-25 03:46:46.223799 | orchestrator | 2025-05-25 03:46:46.224290 | orchestrator | TASK [Get list of Ceph PVs with associated VGs] ******************************** 2025-05-25 03:46:46.225498 | orchestrator | Sunday 25 May 2025 03:46:46 +0000 (0:00:00.507) 0:00:22.335 ************ 2025-05-25 03:46:46.735431 | orchestrator | ok: [testbed-node-3] 2025-05-25 03:46:46.735597 | 
orchestrator | 2025-05-25 03:46:46.735930 | orchestrator | TASK [Combine JSON from _lvs_cmd_output/_pvs_cmd_output] *********************** 2025-05-25 03:46:46.736626 | orchestrator | Sunday 25 May 2025 03:46:46 +0000 (0:00:00.510) 0:00:22.846 ************ 2025-05-25 03:46:46.876128 | orchestrator | ok: [testbed-node-3] 2025-05-25 03:46:46.876853 | orchestrator | 2025-05-25 03:46:46.877620 | orchestrator | TASK [Create list of VG/LV names] ********************************************** 2025-05-25 03:46:46.878606 | orchestrator | Sunday 25 May 2025 03:46:46 +0000 (0:00:00.141) 0:00:22.988 ************ 2025-05-25 03:46:47.039860 | orchestrator | ok: [testbed-node-3] => (item={'lv_name': 'osd-block-02f362e7-7983-50b5-b688-a41104a01860', 'vg_name': 'ceph-02f362e7-7983-50b5-b688-a41104a01860'}) 2025-05-25 03:46:47.040383 | orchestrator | ok: [testbed-node-3] => (item={'lv_name': 'osd-block-b24cffad-8a1f-50fd-b816-ada28c3c4ac7', 'vg_name': 'ceph-b24cffad-8a1f-50fd-b816-ada28c3c4ac7'}) 2025-05-25 03:46:47.041515 | orchestrator | 2025-05-25 03:46:47.042660 | orchestrator | TASK [Fail if block LV defined in lvm_volumes is missing] ********************** 2025-05-25 03:46:47.045600 | orchestrator | Sunday 25 May 2025 03:46:47 +0000 (0:00:00.164) 0:00:23.152 ************ 2025-05-25 03:46:47.375021 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-02f362e7-7983-50b5-b688-a41104a01860', 'data_vg': 'ceph-02f362e7-7983-50b5-b688-a41104a01860'})  2025-05-25 03:46:47.375187 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-b24cffad-8a1f-50fd-b816-ada28c3c4ac7', 'data_vg': 'ceph-b24cffad-8a1f-50fd-b816-ada28c3c4ac7'})  2025-05-25 03:46:47.376234 | orchestrator | skipping: [testbed-node-3] 2025-05-25 03:46:47.377126 | orchestrator | 2025-05-25 03:46:47.377693 | orchestrator | TASK [Fail if DB LV defined in lvm_volumes is missing] ************************* 2025-05-25 03:46:47.380530 | orchestrator | Sunday 25 May 2025 03:46:47 +0000 
(0:00:00.334) 0:00:23.486 ************
2025-05-25 03:46:47.525917 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-02f362e7-7983-50b5-b688-a41104a01860', 'data_vg': 'ceph-02f362e7-7983-50b5-b688-a41104a01860'})
2025-05-25 03:46:47.527529 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-b24cffad-8a1f-50fd-b816-ada28c3c4ac7', 'data_vg': 'ceph-b24cffad-8a1f-50fd-b816-ada28c3c4ac7'})
2025-05-25 03:46:47.528290 | orchestrator | skipping: [testbed-node-3]
2025-05-25 03:46:47.529276 | orchestrator |
2025-05-25 03:46:47.530844 | orchestrator | TASK [Fail if WAL LV defined in lvm_volumes is missing] ************************
2025-05-25 03:46:47.530865 | orchestrator | Sunday 25 May 2025 03:46:47 +0000 (0:00:00.151) 0:00:23.638 ************
2025-05-25 03:46:47.673401 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-02f362e7-7983-50b5-b688-a41104a01860', 'data_vg': 'ceph-02f362e7-7983-50b5-b688-a41104a01860'})
2025-05-25 03:46:47.673628 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-b24cffad-8a1f-50fd-b816-ada28c3c4ac7', 'data_vg': 'ceph-b24cffad-8a1f-50fd-b816-ada28c3c4ac7'})
2025-05-25 03:46:47.674500 | orchestrator | skipping: [testbed-node-3]
2025-05-25 03:46:47.674971 | orchestrator |
2025-05-25 03:46:47.675332 | orchestrator | TASK [Print LVM report data] ***************************************************
2025-05-25 03:46:47.676164 | orchestrator | Sunday 25 May 2025 03:46:47 +0000 (0:00:00.147) 0:00:23.786 ************
2025-05-25 03:46:47.977622 | orchestrator | ok: [testbed-node-3] => {
2025-05-25 03:46:47.978083 | orchestrator |  "lvm_report": {
2025-05-25 03:46:47.978668 | orchestrator |  "lv": [
2025-05-25 03:46:47.980038 | orchestrator |  {
2025-05-25 03:46:47.980431 | orchestrator |  "lv_name": "osd-block-02f362e7-7983-50b5-b688-a41104a01860",
2025-05-25 03:46:47.981761 | orchestrator |  "vg_name": "ceph-02f362e7-7983-50b5-b688-a41104a01860"
2025-05-25 03:46:47.981823 | orchestrator |  },
2025-05-25 03:46:47.982727 | orchestrator |  {
2025-05-25 03:46:47.983217 | orchestrator |  "lv_name": "osd-block-b24cffad-8a1f-50fd-b816-ada28c3c4ac7",
2025-05-25 03:46:47.983514 | orchestrator |  "vg_name": "ceph-b24cffad-8a1f-50fd-b816-ada28c3c4ac7"
2025-05-25 03:46:47.984017 | orchestrator |  }
2025-05-25 03:46:47.984891 | orchestrator |  ],
2025-05-25 03:46:47.985151 | orchestrator |  "pv": [
2025-05-25 03:46:47.985881 | orchestrator |  {
2025-05-25 03:46:47.986214 | orchestrator |  "pv_name": "/dev/sdb",
2025-05-25 03:46:47.986919 | orchestrator |  "vg_name": "ceph-02f362e7-7983-50b5-b688-a41104a01860"
2025-05-25 03:46:47.987351 | orchestrator |  },
2025-05-25 03:46:47.987880 | orchestrator |  {
2025-05-25 03:46:47.988500 | orchestrator |  "pv_name": "/dev/sdc",
2025-05-25 03:46:47.988733 | orchestrator |  "vg_name": "ceph-b24cffad-8a1f-50fd-b816-ada28c3c4ac7"
2025-05-25 03:46:47.989245 | orchestrator |  }
2025-05-25 03:46:47.989604 | orchestrator |  ]
2025-05-25 03:46:47.990077 | orchestrator |  }
2025-05-25 03:46:47.990821 | orchestrator | }
2025-05-25 03:46:47.992025 | orchestrator |
2025-05-25 03:46:47.992486 | orchestrator | PLAY [Ceph create LVM devices] *************************************************
2025-05-25 03:46:47.993165 | orchestrator |
2025-05-25 03:46:47.993549 | orchestrator | TASK [Get extra vars for Ceph configuration] ***********************************
2025-05-25 03:46:47.994178 | orchestrator | Sunday 25 May 2025 03:46:47 +0000 (0:00:00.302) 0:00:24.089 ************
2025-05-25 03:46:48.237375 | orchestrator | ok: [testbed-node-4 -> testbed-manager(192.168.16.5)]
2025-05-25 03:46:48.237900 | orchestrator |
2025-05-25 03:46:48.238777 | orchestrator | TASK [Get initial list of available block devices] *****************************
2025-05-25 03:46:48.239463 | orchestrator | Sunday 25 May 2025 03:46:48 +0000 (0:00:00.260) 0:00:24.350 ************
2025-05-25 03:46:48.482272 | orchestrator | ok:
[testbed-node-4] 2025-05-25 03:46:48.483391 | orchestrator | 2025-05-25 03:46:48.483977 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-25 03:46:48.485025 | orchestrator | Sunday 25 May 2025 03:46:48 +0000 (0:00:00.244) 0:00:24.594 ************ 2025-05-25 03:46:48.883195 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop0) 2025-05-25 03:46:48.884268 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop1) 2025-05-25 03:46:48.885451 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop2) 2025-05-25 03:46:48.886992 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop3) 2025-05-25 03:46:48.887856 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop4) 2025-05-25 03:46:48.888560 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop5) 2025-05-25 03:46:48.889288 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop6) 2025-05-25 03:46:48.889793 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop7) 2025-05-25 03:46:48.890364 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sda) 2025-05-25 03:46:48.891090 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdb) 2025-05-25 03:46:48.891607 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdc) 2025-05-25 03:46:48.892080 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdd) 2025-05-25 03:46:48.892545 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sr0) 2025-05-25 03:46:48.893022 | orchestrator | 2025-05-25 
03:46:48.893568 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-25 03:46:48.894230 | orchestrator | Sunday 25 May 2025 03:46:48 +0000 (0:00:00.399) 0:00:24.994 ************ 2025-05-25 03:46:49.083335 | orchestrator | skipping: [testbed-node-4] 2025-05-25 03:46:49.083456 | orchestrator | 2025-05-25 03:46:49.084015 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-25 03:46:49.084382 | orchestrator | Sunday 25 May 2025 03:46:49 +0000 (0:00:00.202) 0:00:25.196 ************ 2025-05-25 03:46:49.260584 | orchestrator | skipping: [testbed-node-4] 2025-05-25 03:46:49.261130 | orchestrator | 2025-05-25 03:46:49.261914 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-25 03:46:49.263404 | orchestrator | Sunday 25 May 2025 03:46:49 +0000 (0:00:00.176) 0:00:25.372 ************ 2025-05-25 03:46:49.820620 | orchestrator | skipping: [testbed-node-4] 2025-05-25 03:46:49.821582 | orchestrator | 2025-05-25 03:46:49.823841 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-25 03:46:49.823900 | orchestrator | Sunday 25 May 2025 03:46:49 +0000 (0:00:00.559) 0:00:25.932 ************ 2025-05-25 03:46:50.017492 | orchestrator | skipping: [testbed-node-4] 2025-05-25 03:46:50.018004 | orchestrator | 2025-05-25 03:46:50.018898 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-25 03:46:50.019849 | orchestrator | Sunday 25 May 2025 03:46:50 +0000 (0:00:00.197) 0:00:26.130 ************ 2025-05-25 03:46:50.203581 | orchestrator | skipping: [testbed-node-4] 2025-05-25 03:46:50.206073 | orchestrator | 2025-05-25 03:46:50.206175 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-25 03:46:50.206195 | orchestrator | Sunday 25 May 2025 03:46:50 +0000 (0:00:00.185) 
0:00:26.316 ************ 2025-05-25 03:46:50.396974 | orchestrator | skipping: [testbed-node-4] 2025-05-25 03:46:50.397573 | orchestrator | 2025-05-25 03:46:50.398271 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-25 03:46:50.401461 | orchestrator | Sunday 25 May 2025 03:46:50 +0000 (0:00:00.193) 0:00:26.509 ************ 2025-05-25 03:46:50.588474 | orchestrator | skipping: [testbed-node-4] 2025-05-25 03:46:50.589011 | orchestrator | 2025-05-25 03:46:50.590133 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-25 03:46:50.590747 | orchestrator | Sunday 25 May 2025 03:46:50 +0000 (0:00:00.190) 0:00:26.699 ************ 2025-05-25 03:46:50.796187 | orchestrator | skipping: [testbed-node-4] 2025-05-25 03:46:50.796371 | orchestrator | 2025-05-25 03:46:50.796492 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-25 03:46:50.796844 | orchestrator | Sunday 25 May 2025 03:46:50 +0000 (0:00:00.209) 0:00:26.908 ************ 2025-05-25 03:46:51.203970 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_43c767cb-0159-4619-b6d4-e498fa6e5c83) 2025-05-25 03:46:51.205223 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_43c767cb-0159-4619-b6d4-e498fa6e5c83) 2025-05-25 03:46:51.206083 | orchestrator | 2025-05-25 03:46:51.208158 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-25 03:46:51.209946 | orchestrator | Sunday 25 May 2025 03:46:51 +0000 (0:00:00.403) 0:00:27.312 ************ 2025-05-25 03:46:51.602345 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_17d1c6f1-1305-4025-b6c8-ee1be555c001) 2025-05-25 03:46:51.602737 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_17d1c6f1-1305-4025-b6c8-ee1be555c001) 2025-05-25 03:46:51.604391 | orchestrator | 2025-05-25 
03:46:51.605178 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-25 03:46:51.606148 | orchestrator | Sunday 25 May 2025 03:46:51 +0000 (0:00:00.403) 0:00:27.716 ************ 2025-05-25 03:46:52.011768 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_b0e50223-c4d0-48f7-a5f8-d1963b067c82) 2025-05-25 03:46:52.012001 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_b0e50223-c4d0-48f7-a5f8-d1963b067c82) 2025-05-25 03:46:52.012940 | orchestrator | 2025-05-25 03:46:52.014194 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-25 03:46:52.015070 | orchestrator | Sunday 25 May 2025 03:46:52 +0000 (0:00:00.406) 0:00:28.122 ************ 2025-05-25 03:46:52.447470 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_38e86a76-d592-4447-9c79-2151d2192c3f) 2025-05-25 03:46:52.448047 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_38e86a76-d592-4447-9c79-2151d2192c3f) 2025-05-25 03:46:52.449210 | orchestrator | 2025-05-25 03:46:52.450245 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-25 03:46:52.450315 | orchestrator | Sunday 25 May 2025 03:46:52 +0000 (0:00:00.436) 0:00:28.559 ************ 2025-05-25 03:46:52.768224 | orchestrator | ok: [testbed-node-4] => (item=ata-QEMU_DVD-ROM_QM00001) 2025-05-25 03:46:52.768586 | orchestrator | 2025-05-25 03:46:52.769421 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-25 03:46:52.770462 | orchestrator | Sunday 25 May 2025 03:46:52 +0000 (0:00:00.321) 0:00:28.881 ************ 2025-05-25 03:46:53.351030 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop0) 2025-05-25 03:46:53.353075 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop1) 
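The repeated "Add known links to the list of available block devices" tasks above match /dev/disk/by-id aliases (e.g. scsi-0QEMU_QEMU_HARDDISK_…) to base devices such as sdb, via the included _add-device-links.yml. That file's contents are not shown in the log; the following is only a rough Python sketch of the underlying symlink resolution, under the assumption that the task enumerates by-id links and resolves them to kernel device names:

```python
import os

def links_for_device(by_id_dir, device):
    """Return symlink names in by_id_dir that resolve to the given base device.

    by_id_dir is a /dev/disk/by-id style directory; device is a kernel
    name like "sdb". Each symlink is resolved with realpath and matched
    on its basename, so multiple aliases of one disk are all collected.
    """
    links = []
    for name in os.listdir(by_id_dir):
        path = os.path.join(by_id_dir, name)
        if os.path.islink(path) and \
                os.path.basename(os.path.realpath(path)) == device:
            links.append(name)
    return sorted(links)
```

In this run each QEMU disk surfaces under two aliases (scsi-0QEMU_… and scsi-SQEMU_…), which is consistent with collecting every link that resolves to the same base device.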
2025-05-25 03:46:53.354345 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop2)
2025-05-25 03:46:53.354592 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop3)
2025-05-25 03:46:53.355274 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop4)
2025-05-25 03:46:53.355801 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop5)
2025-05-25 03:46:53.356263 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop6)
2025-05-25 03:46:53.356753 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop7)
2025-05-25 03:46:53.357262 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sda)
2025-05-25 03:46:53.357771 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdb)
2025-05-25 03:46:53.358281 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdc)
2025-05-25 03:46:53.358996 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdd)
2025-05-25 03:46:53.359304 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sr0)
2025-05-25 03:46:53.359903 | orchestrator |
2025-05-25 03:46:53.360309 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-05-25 03:46:53.360792 | orchestrator | Sunday 25 May 2025 03:46:53 +0000 (0:00:00.581) 0:00:29.462 ************
2025-05-25 03:46:53.546090 | orchestrator | skipping: [testbed-node-4]
2025-05-25 03:46:53.546816 | orchestrator |
2025-05-25 03:46:53.547917 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-05-25 03:46:53.549813 | orchestrator | Sunday 25 May 2025 03:46:53 +0000 (0:00:00.195) 0:00:29.658 ************
2025-05-25 03:46:53.756063 | orchestrator | skipping: [testbed-node-4]
2025-05-25 03:46:53.756227 | orchestrator |
2025-05-25 03:46:53.756339 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-05-25 03:46:53.758451 | orchestrator | Sunday 25 May 2025 03:46:53 +0000 (0:00:00.210) 0:00:29.869 ************
2025-05-25 03:46:53.955077 | orchestrator | skipping: [testbed-node-4]
2025-05-25 03:46:53.956199 | orchestrator |
2025-05-25 03:46:53.957640 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-05-25 03:46:53.958569 | orchestrator | Sunday 25 May 2025 03:46:53 +0000 (0:00:00.197) 0:00:30.066 ************
2025-05-25 03:46:54.147149 | orchestrator | skipping: [testbed-node-4]
2025-05-25 03:46:54.147379 | orchestrator |
2025-05-25 03:46:54.148257 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-05-25 03:46:54.149065 | orchestrator | Sunday 25 May 2025 03:46:54 +0000 (0:00:00.193) 0:00:30.259 ************
2025-05-25 03:46:54.343967 | orchestrator | skipping: [testbed-node-4]
2025-05-25 03:46:54.344495 | orchestrator |
2025-05-25 03:46:54.346921 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-05-25 03:46:54.347816 | orchestrator | Sunday 25 May 2025 03:46:54 +0000 (0:00:00.196) 0:00:30.456 ************
2025-05-25 03:46:54.532976 | orchestrator | skipping: [testbed-node-4]
2025-05-25 03:46:54.534190 | orchestrator |
2025-05-25 03:46:54.535255 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-05-25 03:46:54.536226 | orchestrator | Sunday 25 May 2025 03:46:54 +0000 (0:00:00.189) 0:00:30.645 ************
2025-05-25 03:46:54.719023 | orchestrator | skipping: [testbed-node-4]
2025-05-25 03:46:54.719705 | orchestrator |
2025-05-25 03:46:54.720656 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-05-25 03:46:54.721717 | orchestrator | Sunday 25 May 2025 03:46:54 +0000 (0:00:00.186) 0:00:30.831 ************
2025-05-25 03:46:54.913627 | orchestrator | skipping: [testbed-node-4]
2025-05-25 03:46:54.914238 | orchestrator |
2025-05-25 03:46:54.914524 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-05-25 03:46:54.916473 | orchestrator | Sunday 25 May 2025 03:46:54 +0000 (0:00:00.193) 0:00:31.025 ************
2025-05-25 03:46:55.752491 | orchestrator | ok: [testbed-node-4] => (item=sda1)
2025-05-25 03:46:55.752618 | orchestrator | ok: [testbed-node-4] => (item=sda14)
2025-05-25 03:46:55.753683 | orchestrator | ok: [testbed-node-4] => (item=sda15)
2025-05-25 03:46:55.753924 | orchestrator | ok: [testbed-node-4] => (item=sda16)
2025-05-25 03:46:55.755732 | orchestrator |
2025-05-25 03:46:55.757397 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-05-25 03:46:55.757910 | orchestrator | Sunday 25 May 2025 03:46:55 +0000 (0:00:00.838) 0:00:31.863 ************
2025-05-25 03:46:55.948992 | orchestrator | skipping: [testbed-node-4]
2025-05-25 03:46:55.950154 | orchestrator |
2025-05-25 03:46:55.950452 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-05-25 03:46:55.951227 | orchestrator | Sunday 25 May 2025 03:46:55 +0000 (0:00:00.196) 0:00:32.060 ************
2025-05-25 03:46:56.137198 | orchestrator | skipping: [testbed-node-4]
2025-05-25 03:46:56.137608 | orchestrator |
2025-05-25 03:46:56.138823 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-05-25 03:46:56.139073 | orchestrator | Sunday 25 May 2025 03:46:56 +0000 (0:00:00.189) 0:00:32.249 ************
2025-05-25 03:46:56.714272 | orchestrator | skipping: [testbed-node-4]
2025-05-25 03:46:56.714705 | orchestrator |
2025-05-25 03:46:56.715824 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-05-25 03:46:56.716309 | orchestrator | Sunday 25 May 2025 03:46:56 +0000 (0:00:00.577) 0:00:32.827 ************
2025-05-25 03:46:56.917091 | orchestrator | skipping: [testbed-node-4]
2025-05-25 03:46:56.917687 | orchestrator |
2025-05-25 03:46:56.918269 | orchestrator | TASK [Check whether ceph_db_wal_devices is used exclusively] *******************
2025-05-25 03:46:56.919092 | orchestrator | Sunday 25 May 2025 03:46:56 +0000 (0:00:00.202) 0:00:33.029 ************
2025-05-25 03:46:57.054690 | orchestrator | skipping: [testbed-node-4]
2025-05-25 03:46:57.055315 | orchestrator |
2025-05-25 03:46:57.055791 | orchestrator | TASK [Create dict of block VGs -> PVs from ceph_osd_devices] *******************
2025-05-25 03:46:57.056628 | orchestrator | Sunday 25 May 2025 03:46:57 +0000 (0:00:00.137) 0:00:33.167 ************
2025-05-25 03:46:57.239237 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '02ca1cf7-fa58-5bc0-a798-b7d21582c1b0'}})
2025-05-25 03:46:57.240417 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '733a1394-dd45-5d63-8d82-63858202edf3'}})
2025-05-25 03:46:57.241055 | orchestrator |
2025-05-25 03:46:57.241776 | orchestrator | TASK [Create block VGs] ********************************************************
2025-05-25 03:46:57.243156 | orchestrator | Sunday 25 May 2025 03:46:57 +0000 (0:00:00.184) 0:00:33.352 ************
2025-05-25 03:46:59.101882 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-02ca1cf7-fa58-5bc0-a798-b7d21582c1b0', 'data_vg': 'ceph-02ca1cf7-fa58-5bc0-a798-b7d21582c1b0'})
2025-05-25 03:46:59.102790 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-733a1394-dd45-5d63-8d82-63858202edf3', 'data_vg': 'ceph-733a1394-dd45-5d63-8d82-63858202edf3'})
2025-05-25 03:46:59.103615 | orchestrator |
2025-05-25 03:46:59.105231 | orchestrator | TASK [Print 'Create block VGs'] ************************************************
2025-05-25 03:46:59.106241 | orchestrator | Sunday 25 May 2025 03:46:59 +0000 (0:00:01.859) 0:00:35.211 ************
2025-05-25 03:46:59.251881 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-02ca1cf7-fa58-5bc0-a798-b7d21582c1b0', 'data_vg': 'ceph-02ca1cf7-fa58-5bc0-a798-b7d21582c1b0'})
2025-05-25 03:46:59.253931 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-733a1394-dd45-5d63-8d82-63858202edf3', 'data_vg': 'ceph-733a1394-dd45-5d63-8d82-63858202edf3'})
2025-05-25 03:46:59.255320 | orchestrator | skipping: [testbed-node-4]
2025-05-25 03:46:59.256783 | orchestrator |
2025-05-25 03:46:59.259091 | orchestrator | TASK [Create block LVs] ********************************************************
2025-05-25 03:46:59.259165 | orchestrator | Sunday 25 May 2025 03:46:59 +0000 (0:00:00.152) 0:00:35.364 ************
2025-05-25 03:47:00.541140 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-02ca1cf7-fa58-5bc0-a798-b7d21582c1b0', 'data_vg': 'ceph-02ca1cf7-fa58-5bc0-a798-b7d21582c1b0'})
2025-05-25 03:47:00.541838 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-733a1394-dd45-5d63-8d82-63858202edf3', 'data_vg': 'ceph-733a1394-dd45-5d63-8d82-63858202edf3'})
2025-05-25 03:47:00.542472 | orchestrator |
2025-05-25 03:47:00.543005 | orchestrator | TASK [Print 'Create block LVs'] ************************************************
2025-05-25 03:47:00.543561 | orchestrator | Sunday 25 May 2025 03:47:00 +0000 (0:00:01.288) 0:00:36.652 ************
2025-05-25 03:47:00.684498 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-02ca1cf7-fa58-5bc0-a798-b7d21582c1b0', 'data_vg': 'ceph-02ca1cf7-fa58-5bc0-a798-b7d21582c1b0'})
2025-05-25 03:47:00.684819 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-733a1394-dd45-5d63-8d82-63858202edf3', 'data_vg': 'ceph-733a1394-dd45-5d63-8d82-63858202edf3'})
2025-05-25 03:47:00.686279 | orchestrator | skipping: [testbed-node-4]
2025-05-25 03:47:00.687257 | orchestrator |
2025-05-25 03:47:00.688875 | orchestrator | TASK [Create DB VGs] ***********************************************************
2025-05-25 03:47:00.688897 | orchestrator | Sunday 25 May 2025 03:47:00 +0000 (0:00:00.144) 0:00:36.797 ************
2025-05-25 03:47:00.827498 | orchestrator | skipping: [testbed-node-4]
2025-05-25 03:47:00.827914 | orchestrator |
2025-05-25 03:47:00.830343 | orchestrator | TASK [Print 'Create DB VGs'] ***************************************************
2025-05-25 03:47:00.830434 | orchestrator | Sunday 25 May 2025 03:47:00 +0000 (0:00:00.141) 0:00:36.938 ************
2025-05-25 03:47:00.978304 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-02ca1cf7-fa58-5bc0-a798-b7d21582c1b0', 'data_vg': 'ceph-02ca1cf7-fa58-5bc0-a798-b7d21582c1b0'})
2025-05-25 03:47:00.978386 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-733a1394-dd45-5d63-8d82-63858202edf3', 'data_vg': 'ceph-733a1394-dd45-5d63-8d82-63858202edf3'})
2025-05-25 03:47:00.978822 | orchestrator | skipping: [testbed-node-4]
2025-05-25 03:47:00.978905 | orchestrator |
2025-05-25 03:47:00.979225 | orchestrator | TASK [Create WAL VGs] **********************************************************
2025-05-25 03:47:00.979990 | orchestrator | Sunday 25 May 2025 03:47:00 +0000 (0:00:00.149) 0:00:37.088 ************
2025-05-25 03:47:01.107666 | orchestrator | skipping: [testbed-node-4]
2025-05-25 03:47:01.107928 | orchestrator |
2025-05-25 03:47:01.109411 | orchestrator | TASK [Print 'Create WAL VGs'] **************************************************
2025-05-25 03:47:01.110759 | orchestrator | Sunday 25 May 2025 03:47:01 +0000 (0:00:00.132) 0:00:37.220 ************
2025-05-25 03:47:01.255999 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-02ca1cf7-fa58-5bc0-a798-b7d21582c1b0', 'data_vg': 'ceph-02ca1cf7-fa58-5bc0-a798-b7d21582c1b0'})
2025-05-25 03:47:01.256326 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-733a1394-dd45-5d63-8d82-63858202edf3', 'data_vg': 'ceph-733a1394-dd45-5d63-8d82-63858202edf3'})
2025-05-25 03:47:01.257484 | orchestrator | skipping: [testbed-node-4]
2025-05-25 03:47:01.257786 | orchestrator |
2025-05-25 03:47:01.259329 | orchestrator | TASK [Create DB+WAL VGs] *******************************************************
2025-05-25 03:47:01.259392 | orchestrator | Sunday 25 May 2025 03:47:01 +0000 (0:00:00.147) 0:00:37.368 ************
2025-05-25 03:47:01.574580 | orchestrator | skipping: [testbed-node-4]
2025-05-25 03:47:01.574768 | orchestrator |
2025-05-25 03:47:01.576158 | orchestrator | TASK [Print 'Create DB+WAL VGs'] ***********************************************
2025-05-25 03:47:01.576667 | orchestrator | Sunday 25 May 2025 03:47:01 +0000 (0:00:00.318) 0:00:37.686 ************
2025-05-25 03:47:01.723363 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-02ca1cf7-fa58-5bc0-a798-b7d21582c1b0', 'data_vg': 'ceph-02ca1cf7-fa58-5bc0-a798-b7d21582c1b0'})
2025-05-25 03:47:01.724868 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-733a1394-dd45-5d63-8d82-63858202edf3', 'data_vg': 'ceph-733a1394-dd45-5d63-8d82-63858202edf3'})
2025-05-25 03:47:01.725519 | orchestrator | skipping: [testbed-node-4]
2025-05-25 03:47:01.727228 | orchestrator |
2025-05-25 03:47:01.727256 | orchestrator | TASK [Prepare variables for OSD count check] ***********************************
2025-05-25 03:47:01.728226 | orchestrator | Sunday 25 May 2025 03:47:01 +0000 (0:00:00.149) 0:00:37.835 ************
2025-05-25 03:47:01.872652 | orchestrator | ok: [testbed-node-4]
2025-05-25 03:47:01.873133 | orchestrator |
2025-05-25 03:47:01.874069 | orchestrator | TASK [Count OSDs put on ceph_db_devices defined in lvm_volumes] ****************
2025-05-25 03:47:01.875996 | orchestrator | Sunday 25 May 2025 03:47:01 +0000 (0:00:00.149) 0:00:37.984 ************
2025-05-25 03:47:02.024849 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-02ca1cf7-fa58-5bc0-a798-b7d21582c1b0', 'data_vg': 'ceph-02ca1cf7-fa58-5bc0-a798-b7d21582c1b0'})
2025-05-25 03:47:02.025323 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-733a1394-dd45-5d63-8d82-63858202edf3', 'data_vg': 'ceph-733a1394-dd45-5d63-8d82-63858202edf3'})
2025-05-25 03:47:02.026156 | orchestrator | skipping: [testbed-node-4]
2025-05-25 03:47:02.026945 | orchestrator |
2025-05-25 03:47:02.028912 | orchestrator | TASK [Count OSDs put on ceph_wal_devices defined in lvm_volumes] ***************
2025-05-25 03:47:02.028951 | orchestrator | Sunday 25 May 2025 03:47:02 +0000 (0:00:00.152) 0:00:38.137 ************
2025-05-25 03:47:02.190501 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-02ca1cf7-fa58-5bc0-a798-b7d21582c1b0', 'data_vg': 'ceph-02ca1cf7-fa58-5bc0-a798-b7d21582c1b0'})
2025-05-25 03:47:02.190827 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-733a1394-dd45-5d63-8d82-63858202edf3', 'data_vg': 'ceph-733a1394-dd45-5d63-8d82-63858202edf3'})
2025-05-25 03:47:02.191960 | orchestrator | skipping: [testbed-node-4]
2025-05-25 03:47:02.192772 | orchestrator |
2025-05-25 03:47:02.194287 | orchestrator | TASK [Count OSDs put on ceph_db_wal_devices defined in lvm_volumes] ************
2025-05-25 03:47:02.195232 | orchestrator | Sunday 25 May 2025 03:47:02 +0000 (0:00:00.165) 0:00:38.302 ************
2025-05-25 03:47:02.349379 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-02ca1cf7-fa58-5bc0-a798-b7d21582c1b0', 'data_vg': 'ceph-02ca1cf7-fa58-5bc0-a798-b7d21582c1b0'})
2025-05-25 03:47:02.350392 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-733a1394-dd45-5d63-8d82-63858202edf3', 'data_vg': 'ceph-733a1394-dd45-5d63-8d82-63858202edf3'})
2025-05-25 03:47:02.353233 | orchestrator | skipping: [testbed-node-4]
2025-05-25 03:47:02.353254 | orchestrator |
2025-05-25 03:47:02.354366 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB VG] *********************
2025-05-25 03:47:02.355324 | orchestrator | Sunday 25 May 2025 03:47:02 +0000 (0:00:00.159) 0:00:38.461 ************
2025-05-25 03:47:02.488256 | orchestrator | skipping: [testbed-node-4]
2025-05-25 03:47:02.489056 | orchestrator |
2025-05-25 03:47:02.491069 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a WAL VG] ********************
2025-05-25 03:47:02.492561 | orchestrator | Sunday 25 May 2025 03:47:02 +0000 (0:00:00.139) 0:00:38.601 ************
2025-05-25 03:47:02.630797 | orchestrator | skipping: [testbed-node-4]
2025-05-25 03:47:02.632035 | orchestrator |
2025-05-25 03:47:02.632977 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB+WAL VG] *****************
2025-05-25 03:47:02.634312 | orchestrator | Sunday 25 May 2025 03:47:02 +0000 (0:00:00.141) 0:00:38.743 ************
2025-05-25 03:47:02.760811 | orchestrator | skipping: [testbed-node-4]
2025-05-25 03:47:02.761895 | orchestrator |
2025-05-25 03:47:02.763080 | orchestrator | TASK [Print number of OSDs wanted per DB VG] ***********************************
2025-05-25 03:47:02.764041 | orchestrator | Sunday 25 May 2025 03:47:02 +0000 (0:00:00.130) 0:00:38.873 ************
2025-05-25 03:47:02.897407 | orchestrator | ok: [testbed-node-4] => {
2025-05-25 03:47:02.898647 | orchestrator |     "_num_osds_wanted_per_db_vg": {}
2025-05-25 03:47:02.899683 | orchestrator | }
2025-05-25 03:47:02.900725 | orchestrator |
2025-05-25 03:47:02.901477 | orchestrator | TASK [Print number of OSDs wanted per WAL VG] **********************************
2025-05-25 03:47:02.902666 | orchestrator | Sunday 25 May 2025 03:47:02 +0000 (0:00:00.136) 0:00:39.009 ************
2025-05-25 03:47:03.038324 | orchestrator | ok: [testbed-node-4] => {
2025-05-25 03:47:03.040531 | orchestrator |     "_num_osds_wanted_per_wal_vg": {}
2025-05-25 03:47:03.040928 | orchestrator | }
2025-05-25 03:47:03.042357 | orchestrator |
2025-05-25 03:47:03.043483 | orchestrator | TASK [Print number of OSDs wanted per DB+WAL VG] *******************************
2025-05-25 03:47:03.044191 | orchestrator | Sunday 25 May 2025 03:47:03 +0000 (0:00:00.140) 0:00:39.150 ************
2025-05-25 03:47:03.177898 | orchestrator | ok: [testbed-node-4] => {
2025-05-25 03:47:03.182364 | orchestrator |     "_num_osds_wanted_per_db_wal_vg": {}
2025-05-25 03:47:03.183453 | orchestrator | }
2025-05-25 03:47:03.186695 | orchestrator |
2025-05-25 03:47:03.187883 | orchestrator | TASK [Gather DB VGs with total and available size in bytes] ********************
2025-05-25 03:47:03.188753 | orchestrator | Sunday 25 May 2025 03:47:03 +0000 (0:00:00.139) 0:00:39.290 ************
2025-05-25 03:47:03.887973 | orchestrator | ok: [testbed-node-4]
2025-05-25 03:47:03.888205 | orchestrator |
2025-05-25 03:47:03.889380 | orchestrator | TASK [Gather WAL VGs with total and available size in bytes] *******************
2025-05-25 03:47:03.891090 | orchestrator | Sunday 25 May 2025 03:47:03 +0000 (0:00:00.710) 0:00:40.000 ************
2025-05-25 03:47:04.390508 | orchestrator | ok: [testbed-node-4]
2025-05-25 03:47:04.390794 | orchestrator |
2025-05-25 03:47:04.391853 | orchestrator | TASK [Gather DB+WAL VGs with total and available size in bytes] ****************
2025-05-25 03:47:04.392351 | orchestrator | Sunday 25 May 2025 03:47:04 +0000 (0:00:00.500) 0:00:40.501 ************
2025-05-25 03:47:04.922787 | orchestrator | ok: [testbed-node-4]
2025-05-25 03:47:04.922889 | orchestrator |
2025-05-25 03:47:04.922970 | orchestrator | TASK [Combine JSON from _db/wal/db_wal_vgs_cmd_output] *************************
2025-05-25 03:47:04.923313 | orchestrator | Sunday 25 May 2025 03:47:04 +0000 (0:00:00.530) 0:00:41.031 ************
2025-05-25 03:47:05.073918 | orchestrator | ok: [testbed-node-4]
2025-05-25 03:47:05.073995 | orchestrator |
2025-05-25 03:47:05.075040 | orchestrator | TASK [Calculate VG sizes (without buffer)] *************************************
2025-05-25 03:47:05.075499 | orchestrator | Sunday 25 May 2025 03:47:05 +0000 (0:00:00.155) 0:00:41.186 ************
2025-05-25 03:47:05.191251 | orchestrator | skipping: [testbed-node-4]
2025-05-25 03:47:05.191911 | orchestrator |
2025-05-25 03:47:05.192885 | orchestrator | TASK [Calculate VG sizes (with buffer)] ****************************************
2025-05-25 03:47:05.193406 | orchestrator | Sunday 25 May 2025 03:47:05 +0000 (0:00:00.116) 0:00:41.303 ************
2025-05-25 03:47:05.303802 | orchestrator | skipping: [testbed-node-4]
2025-05-25 03:47:05.304862 | orchestrator |
2025-05-25 03:47:05.306280 | orchestrator | TASK [Print LVM VGs report data] ***********************************************
2025-05-25 03:47:05.306942 | orchestrator | Sunday 25 May 2025 03:47:05 +0000 (0:00:00.111) 0:00:41.414 ************
2025-05-25 03:47:05.448302 | orchestrator | ok: [testbed-node-4] => {
2025-05-25 03:47:05.449744 | orchestrator |     "vgs_report": {
2025-05-25 03:47:05.450192 | orchestrator |         "vg": []
2025-05-25 03:47:05.451266 | orchestrator |     }
2025-05-25 03:47:05.451863 | orchestrator | }
2025-05-25 03:47:05.452826 | orchestrator |
2025-05-25 03:47:05.453597 | orchestrator | TASK [Print LVM VG sizes] ******************************************************
2025-05-25 03:47:05.453815 | orchestrator | Sunday 25 May 2025 03:47:05 +0000 (0:00:00.145) 0:00:41.560 ************
2025-05-25 03:47:05.575176 | orchestrator | skipping: [testbed-node-4]
2025-05-25 03:47:05.575512 | orchestrator |
2025-05-25 03:47:05.576515 | orchestrator | TASK [Calculate size needed for LVs on ceph_db_devices] ************************
2025-05-25 03:47:05.577364 | orchestrator | Sunday 25 May 2025 03:47:05 +0000 (0:00:00.127) 0:00:41.688 ************
2025-05-25 03:47:05.710831 | orchestrator | skipping: [testbed-node-4]
2025-05-25 03:47:05.712901 | orchestrator |
2025-05-25 03:47:05.712926 | orchestrator | TASK [Print size needed for LVs on ceph_db_devices] ****************************
2025-05-25 03:47:05.713161 | orchestrator | Sunday 25 May 2025 03:47:05 +0000 (0:00:00.134) 0:00:41.822 ************
2025-05-25 03:47:05.844187 | orchestrator | skipping: [testbed-node-4]
2025-05-25 03:47:05.844740 | orchestrator |
2025-05-25 03:47:05.846447 | orchestrator | TASK [Fail if size of DB LVs on ceph_db_devices > available] *******************
2025-05-25 03:47:05.847505 | orchestrator | Sunday 25 May 2025 03:47:05 +0000 (0:00:00.133) 0:00:41.956 ************
2025-05-25 03:47:05.972430 | orchestrator | skipping: [testbed-node-4]
2025-05-25 03:47:05.973052 | orchestrator |
2025-05-25 03:47:05.973896 | orchestrator | TASK [Calculate size needed for LVs on ceph_wal_devices] ***********************
2025-05-25 03:47:05.974751 | orchestrator | Sunday 25 May 2025 03:47:05 +0000 (0:00:00.128) 0:00:42.084 ************
2025-05-25 03:47:06.105783 | orchestrator | skipping: [testbed-node-4]
2025-05-25 03:47:06.106623 | orchestrator |
2025-05-25 03:47:06.107432 | orchestrator | TASK [Print size needed for LVs on ceph_wal_devices] ***************************
2025-05-25 03:47:06.108159 | orchestrator | Sunday 25 May 2025 03:47:06 +0000 (0:00:00.133) 0:00:42.218 ************
2025-05-25 03:47:06.440061 | orchestrator | skipping: [testbed-node-4]
2025-05-25 03:47:06.440274 | orchestrator |
2025-05-25 03:47:06.441817 | orchestrator | TASK [Fail if size of WAL LVs on ceph_wal_devices > available] *****************
2025-05-25 03:47:06.443077 | orchestrator | Sunday 25 May 2025 03:47:06 +0000 (0:00:00.331) 0:00:42.550 ************
2025-05-25 03:47:06.581245 | orchestrator | skipping: [testbed-node-4]
2025-05-25 03:47:06.581522 | orchestrator |
2025-05-25 03:47:06.582534 | orchestrator | TASK [Calculate size needed for WAL LVs on ceph_db_wal_devices] ****************
2025-05-25 03:47:06.583213 | orchestrator | Sunday 25 May 2025 03:47:06 +0000 (0:00:00.141) 0:00:42.691 ************
2025-05-25 03:47:06.713860 | orchestrator | skipping: [testbed-node-4]
2025-05-25 03:47:06.714735 | orchestrator |
2025-05-25 03:47:06.715031 | orchestrator | TASK [Print size needed for WAL LVs on ceph_db_wal_devices] ********************
2025-05-25 03:47:06.715915 | orchestrator | Sunday 25 May 2025 03:47:06 +0000 (0:00:00.135) 0:00:42.827 ************
2025-05-25 03:47:06.843733 | orchestrator | skipping: [testbed-node-4]
2025-05-25 03:47:06.844710 | orchestrator |
2025-05-25 03:47:06.845576 | orchestrator | TASK [Calculate size needed for DB LVs on ceph_db_wal_devices] *****************
2025-05-25 03:47:06.846237 | orchestrator | Sunday 25 May 2025 03:47:06 +0000 (0:00:00.129) 0:00:42.956 ************
2025-05-25 03:47:06.966990 | orchestrator | skipping: [testbed-node-4]
2025-05-25 03:47:06.967621 | orchestrator |
2025-05-25 03:47:06.968715 | orchestrator | TASK [Print size needed for DB LVs on ceph_db_wal_devices] *********************
2025-05-25 03:47:06.971550 | orchestrator | Sunday 25 May 2025 03:47:06 +0000 (0:00:00.123) 0:00:43.080 ************
2025-05-25 03:47:07.100146 | orchestrator | skipping: [testbed-node-4]
2025-05-25 03:47:07.100915 | orchestrator |
2025-05-25 03:47:07.102219 | orchestrator | TASK [Fail if size of DB+WAL LVs on ceph_db_wal_devices > available] ***********
2025-05-25 03:47:07.103948 | orchestrator | Sunday 25 May 2025 03:47:07 +0000 (0:00:00.132) 0:00:43.212 ************
2025-05-25 03:47:07.239951 | orchestrator | skipping: [testbed-node-4]
2025-05-25 03:47:07.240029 | orchestrator |
2025-05-25 03:47:07.243002 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_devices] *************************
2025-05-25 03:47:07.243027 | orchestrator | Sunday 25 May 2025 03:47:07 +0000 (0:00:00.138) 0:00:43.351 ************
2025-05-25 03:47:07.361883 | orchestrator | skipping: [testbed-node-4]
2025-05-25 03:47:07.362427 | orchestrator |
2025-05-25 03:47:07.363542 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_wal_devices] *********************
2025-05-25 03:47:07.364709 | orchestrator | Sunday 25 May 2025 03:47:07 +0000 (0:00:00.121) 0:00:43.472 ************
2025-05-25 03:47:07.498241 | orchestrator | skipping: [testbed-node-4]
2025-05-25 03:47:07.499385 | orchestrator |
2025-05-25 03:47:07.499725 | orchestrator | TASK [Create DB LVs for ceph_db_devices] ***************************************
2025-05-25 03:47:07.500272 | orchestrator | Sunday 25 May 2025 03:47:07 +0000 (0:00:00.134) 0:00:43.607 ************
2025-05-25 03:47:07.640146 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-02ca1cf7-fa58-5bc0-a798-b7d21582c1b0', 'data_vg': 'ceph-02ca1cf7-fa58-5bc0-a798-b7d21582c1b0'})
2025-05-25 03:47:07.641996 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-733a1394-dd45-5d63-8d82-63858202edf3', 'data_vg': 'ceph-733a1394-dd45-5d63-8d82-63858202edf3'})
2025-05-25 03:47:07.643809 | orchestrator | skipping: [testbed-node-4]
2025-05-25 03:47:07.645301 | orchestrator |
2025-05-25 03:47:07.645757 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_devices'] *******************************
2025-05-25 03:47:07.646740 | orchestrator | Sunday 25 May 2025 03:47:07 +0000 (0:00:00.143) 0:00:43.751 ************
2025-05-25 03:47:07.788076 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-02ca1cf7-fa58-5bc0-a798-b7d21582c1b0', 'data_vg': 'ceph-02ca1cf7-fa58-5bc0-a798-b7d21582c1b0'})
2025-05-25 03:47:07.788499 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-733a1394-dd45-5d63-8d82-63858202edf3', 'data_vg': 'ceph-733a1394-dd45-5d63-8d82-63858202edf3'})
2025-05-25 03:47:07.789624 | orchestrator | skipping: [testbed-node-4]
2025-05-25 03:47:07.790125 | orchestrator |
2025-05-25 03:47:07.791355 | orchestrator | TASK [Create WAL LVs for ceph_wal_devices] *************************************
2025-05-25 03:47:07.792073 | orchestrator | Sunday 25 May 2025 03:47:07 +0000 (0:00:00.148) 0:00:43.899 ************
2025-05-25 03:47:07.936004 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-02ca1cf7-fa58-5bc0-a798-b7d21582c1b0', 'data_vg': 'ceph-02ca1cf7-fa58-5bc0-a798-b7d21582c1b0'})
2025-05-25 03:47:07.937439 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-733a1394-dd45-5d63-8d82-63858202edf3', 'data_vg': 'ceph-733a1394-dd45-5d63-8d82-63858202edf3'})
2025-05-25 03:47:07.937816 | orchestrator | skipping: [testbed-node-4]
2025-05-25 03:47:07.938783 | orchestrator |
2025-05-25 03:47:07.939643 | orchestrator | TASK [Print 'Create WAL LVs for ceph_wal_devices'] *****************************
2025-05-25 03:47:07.940716 | orchestrator | Sunday 25 May 2025 03:47:07 +0000 (0:00:00.148) 0:00:44.048 ************
2025-05-25 03:47:08.286583 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-02ca1cf7-fa58-5bc0-a798-b7d21582c1b0', 'data_vg': 'ceph-02ca1cf7-fa58-5bc0-a798-b7d21582c1b0'})
2025-05-25 03:47:08.288261 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-733a1394-dd45-5d63-8d82-63858202edf3', 'data_vg': 'ceph-733a1394-dd45-5d63-8d82-63858202edf3'})
2025-05-25 03:47:08.290115 | orchestrator | skipping: [testbed-node-4]
2025-05-25 03:47:08.290755 | orchestrator |
2025-05-25 03:47:08.292239 | orchestrator | TASK [Create WAL LVs for ceph_db_wal_devices] **********************************
2025-05-25 03:47:08.293158 | orchestrator | Sunday 25 May 2025 03:47:08 +0000 (0:00:00.349) 0:00:44.398 ************
2025-05-25 03:47:08.448644 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-02ca1cf7-fa58-5bc0-a798-b7d21582c1b0', 'data_vg': 'ceph-02ca1cf7-fa58-5bc0-a798-b7d21582c1b0'})
2025-05-25 03:47:08.449778 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-733a1394-dd45-5d63-8d82-63858202edf3', 'data_vg': 'ceph-733a1394-dd45-5d63-8d82-63858202edf3'})
2025-05-25 03:47:08.451876 | orchestrator | skipping: [testbed-node-4]
2025-05-25 03:47:08.451966 | orchestrator |
2025-05-25 03:47:08.452844 | orchestrator | TASK [Print 'Create WAL LVs for ceph_db_wal_devices'] **************************
2025-05-25 03:47:08.453213 | orchestrator | Sunday 25 May 2025 03:47:08 +0000 (0:00:00.161) 0:00:44.560 ************
2025-05-25 03:47:08.590506 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-02ca1cf7-fa58-5bc0-a798-b7d21582c1b0', 'data_vg': 'ceph-02ca1cf7-fa58-5bc0-a798-b7d21582c1b0'})
2025-05-25 03:47:08.590693 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-733a1394-dd45-5d63-8d82-63858202edf3', 'data_vg': 'ceph-733a1394-dd45-5d63-8d82-63858202edf3'})
2025-05-25 03:47:08.592137 | orchestrator | skipping: [testbed-node-4]
2025-05-25 03:47:08.594490 | orchestrator |
2025-05-25 03:47:08.596058 | orchestrator | TASK [Create DB LVs for ceph_db_wal_devices] ***********************************
2025-05-25 03:47:08.596954 | orchestrator | Sunday 25 May 2025 03:47:08 +0000 (0:00:00.142) 0:00:44.702 ************
2025-05-25 03:47:08.747726 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-02ca1cf7-fa58-5bc0-a798-b7d21582c1b0', 'data_vg': 'ceph-02ca1cf7-fa58-5bc0-a798-b7d21582c1b0'})
2025-05-25 03:47:08.749869 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-733a1394-dd45-5d63-8d82-63858202edf3', 'data_vg': 'ceph-733a1394-dd45-5d63-8d82-63858202edf3'})
2025-05-25 03:47:08.752168 | orchestrator | skipping: [testbed-node-4]
2025-05-25 03:47:08.753044 | orchestrator |
2025-05-25 03:47:08.753224 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_wal_devices'] ***************************
2025-05-25 03:47:08.754200 | orchestrator | Sunday 25 May 2025 03:47:08 +0000 (0:00:00.156) 0:00:44.859 ************
2025-05-25 03:47:08.909800 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-02ca1cf7-fa58-5bc0-a798-b7d21582c1b0', 'data_vg': 'ceph-02ca1cf7-fa58-5bc0-a798-b7d21582c1b0'})
2025-05-25 03:47:08.909955 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-733a1394-dd45-5d63-8d82-63858202edf3', 'data_vg': 'ceph-733a1394-dd45-5d63-8d82-63858202edf3'})
2025-05-25 03:47:08.910242 | orchestrator | skipping: [testbed-node-4]
2025-05-25 03:47:08.910556 | orchestrator |
2025-05-25 03:47:08.910919 | orchestrator | TASK [Get list of Ceph LVs with associated VGs] ********************************
2025-05-25 03:47:08.911392 | orchestrator | Sunday 25 May 2025 03:47:08 +0000 (0:00:00.163) 0:00:45.023 ************
2025-05-25 03:47:09.424503 | orchestrator | ok: [testbed-node-4]
2025-05-25 03:47:09.425288 | orchestrator |
2025-05-25 03:47:09.425402 | orchestrator | TASK [Get list of Ceph PVs with associated VGs] ********************************
2025-05-25 03:47:09.427200 | orchestrator | Sunday 25 May 2025 03:47:09 +0000 (0:00:00.506) 0:00:45.529 ************
2025-05-25 03:47:09.918787 | orchestrator | ok: [testbed-node-4]
2025-05-25 03:47:09.918954 | orchestrator |
2025-05-25 03:47:09.919712 | orchestrator | TASK [Combine JSON from _lvs_cmd_output/_pvs_cmd_output] ***********************
2025-05-25 03:47:09.920683 | orchestrator | Sunday 25 May 2025 03:47:09 +0000 (0:00:00.501) 0:00:46.031 ************
2025-05-25 03:47:10.059844 | orchestrator | ok: [testbed-node-4]
2025-05-25 03:47:10.060000 | orchestrator |
2025-05-25 03:47:10.060767 | orchestrator | TASK [Create list of VG/LV names] **********************************************
2025-05-25 03:47:10.061235 | orchestrator | Sunday 25 May 2025 03:47:10 +0000 (0:00:00.140) 0:00:46.172 ************
2025-05-25 03:47:10.225054 | orchestrator | ok: [testbed-node-4] => (item={'lv_name': 'osd-block-02ca1cf7-fa58-5bc0-a798-b7d21582c1b0', 'vg_name': 'ceph-02ca1cf7-fa58-5bc0-a798-b7d21582c1b0'})
2025-05-25 03:47:10.226072 | orchestrator | ok: [testbed-node-4] => (item={'lv_name': 'osd-block-733a1394-dd45-5d63-8d82-63858202edf3', 'vg_name': 'ceph-733a1394-dd45-5d63-8d82-63858202edf3'})
2025-05-25 03:47:10.227381 | orchestrator |
2025-05-25 03:47:10.228321 | orchestrator | TASK [Fail if block LV defined in lvm_volumes is missing] **********************
2025-05-25 03:47:10.229352 | orchestrator | Sunday 25 May 2025 03:47:10 +0000 (0:00:00.164) 0:00:46.337 ************
2025-05-25 03:47:10.386739 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-02ca1cf7-fa58-5bc0-a798-b7d21582c1b0', 'data_vg': 'ceph-02ca1cf7-fa58-5bc0-a798-b7d21582c1b0'})
2025-05-25 03:47:10.386837 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-733a1394-dd45-5d63-8d82-63858202edf3', 'data_vg': 'ceph-733a1394-dd45-5d63-8d82-63858202edf3'})
2025-05-25 03:47:10.387527 | orchestrator | skipping: [testbed-node-4]
2025-05-25 03:47:10.388469 | orchestrator |
2025-05-25 03:47:10.390119 | orchestrator | TASK [Fail if DB LV defined in lvm_volumes is missing] *************************
2025-05-25 03:47:10.393061 | orchestrator | Sunday 25 May 2025 03:47:10 +0000 (0:00:00.159) 0:00:46.496 ************
2025-05-25 03:47:10.537749 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-02ca1cf7-fa58-5bc0-a798-b7d21582c1b0', 'data_vg': 'ceph-02ca1cf7-fa58-5bc0-a798-b7d21582c1b0'})
2025-05-25 03:47:10.539337 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-733a1394-dd45-5d63-8d82-63858202edf3', 'data_vg': 'ceph-733a1394-dd45-5d63-8d82-63858202edf3'})
2025-05-25 03:47:10.544879 | orchestrator | skipping: [testbed-node-4]
2025-05-25 03:47:10.544936 | orchestrator |
2025-05-25 03:47:10.546581 | orchestrator | TASK [Fail if WAL LV defined in lvm_volumes is missing] ************************
2025-05-25 03:47:10.547405 | orchestrator | Sunday 25 May 2025 03:47:10 +0000 (0:00:00.153) 0:00:46.650 ************
2025-05-25 03:47:10.691554 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-02ca1cf7-fa58-5bc0-a798-b7d21582c1b0', 'data_vg': 'ceph-02ca1cf7-fa58-5bc0-a798-b7d21582c1b0'})
2025-05-25 03:47:10.693524 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-733a1394-dd45-5d63-8d82-63858202edf3', 'data_vg': 'ceph-733a1394-dd45-5d63-8d82-63858202edf3'})
2025-05-25 03:47:10.695434 | orchestrator | skipping: [testbed-node-4]
2025-05-25 03:47:10.695447 | orchestrator |
2025-05-25 03:47:10.696467 | orchestrator | TASK [Print LVM report data] ***************************************************
2025-05-25 03:47:10.697197 | orchestrator | Sunday 25 May 2025 03:47:10 +0000 (0:00:00.153) 0:00:46.803 ************
2025-05-25 03:47:11.179137 | orchestrator | ok: [testbed-node-4] => {
2025-05-25 03:47:11.180628 | orchestrator |     "lvm_report": {
2025-05-25 03:47:11.181852 | orchestrator |         "lv": [
2025-05-25 03:47:11.182689 | orchestrator |             {
2025-05-25 03:47:11.183661 | orchestrator |                 "lv_name": "osd-block-02ca1cf7-fa58-5bc0-a798-b7d21582c1b0",
2025-05-25 03:47:11.184038 | orchestrator |                 "vg_name": "ceph-02ca1cf7-fa58-5bc0-a798-b7d21582c1b0"
2025-05-25 03:47:11.185027 | orchestrator |             },
2025-05-25 03:47:11.185303 | orchestrator |             {
2025-05-25 03:47:11.185726 | orchestrator |                 "lv_name": "osd-block-733a1394-dd45-5d63-8d82-63858202edf3",
2025-05-25 03:47:11.186858 | orchestrator |                 "vg_name": "ceph-733a1394-dd45-5d63-8d82-63858202edf3"
2025-05-25 03:47:11.187293 | orchestrator |             }
2025-05-25 03:47:11.187815 | orchestrator |         ],
2025-05-25 03:47:11.188299 | orchestrator |         "pv": [
2025-05-25 03:47:11.189363 | orchestrator |             {
2025-05-25 03:47:11.190087 | orchestrator |                 "pv_name": "/dev/sdb",
2025-05-25 03:47:11.190651 | orchestrator |                 "vg_name": "ceph-02ca1cf7-fa58-5bc0-a798-b7d21582c1b0"
2025-05-25 03:47:11.190853 | orchestrator |             },
2025-05-25 03:47:11.192197 | orchestrator |             {
2025-05-25 03:47:11.192428 | orchestrator |                 "pv_name": "/dev/sdc",
2025-05-25 03:47:11.193343 | orchestrator |                 "vg_name": "ceph-733a1394-dd45-5d63-8d82-63858202edf3"
2025-05-25 03:47:11.193884 | orchestrator |             }
2025-05-25 03:47:11.194008 | orchestrator |         ]
2025-05-25 03:47:11.194903 | orchestrator |     }
2025-05-25 03:47:11.195194 | orchestrator | }
2025-05-25 03:47:11.195527 | orchestrator |
2025-05-25 03:47:11.196528 | orchestrator | PLAY [Ceph create LVM devices] *************************************************
2025-05-25 03:47:11.196781 | orchestrator |
2025-05-25 03:47:11.197762 | orchestrator | TASK [Get extra vars for Ceph configuration] ***********************************
2025-05-25 03:47:11.198395 | orchestrator | Sunday 25 May 2025 03:47:11 +0000 (0:00:00.487) 0:00:47.291 ************
2025-05-25 03:47:11.439570 | orchestrator | ok: [testbed-node-5 -> testbed-manager(192.168.16.5)]
2025-05-25 03:47:11.440248 | orchestrator |
2025-05-25 03:47:11.440748 | orchestrator | TASK [Get initial list of available block devices] *****************************
2025-05-25 03:47:11.441234 | orchestrator | Sunday 25 May 2025 03:47:11 +0000 (0:00:00.256) 0:00:47.548 ************
2025-05-25 03:47:11.654232 | orchestrator | ok: [testbed-node-5]
2025-05-25 03:47:11.654828 | orchestrator |
2025-05-25 03:47:11.655785 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-05-25 03:47:11.656692 | orchestrator | Sunday 25 May 2025 03:47:11 +0000 (0:00:00.218) 0:00:47.767 ************
2025-05-25 03:47:12.058742 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop0)
2025-05-25 03:47:12.059991 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop1)
2025-05-25 03:47:12.061049 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop2)
2025-05-25 03:47:12.061415 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop3)
2025-05-25 03:47:12.062679 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop4)
2025-05-25 03:47:12.063991 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop5)
2025-05-25 03:47:12.064688 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop6)
2025-05-25 03:47:12.066851 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop7)
2025-05-25 03:47:12.066875 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sda)
2025-05-25 03:47:12.068529 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdb)
2025-05-25 03:47:12.070379 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdc)
2025-05-25 03:47:12.071272 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdd)
2025-05-25 03:47:12.075898 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sr0)
2025-05-25 03:47:12.075922 | orchestrator |
2025-05-25 03:47:12.077294 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-05-25 03:47:12.077936 | orchestrator | Sunday 25 May 2025 03:47:12 +0000 (0:00:00.404) 0:00:48.171 ************
2025-05-25 03:47:12.230599 | orchestrator | skipping: [testbed-node-5]
2025-05-25 03:47:12.230765 | orchestrator |
2025-05-25 03:47:12.231851 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-05-25 03:47:12.233134 | orchestrator | Sunday 25 May 2025 03:47:12 +0000 (0:00:00.171) 0:00:48.342 ************
2025-05-25 03:47:12.427439 | orchestrator | skipping: [testbed-node-5]
2025-05-25 03:47:12.427952 | orchestrator |
2025-05-25 03:47:12.428729 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-05-25 03:47:12.429834 | orchestrator |
Sunday 25 May 2025 03:47:12 +0000 (0:00:00.196) 0:00:48.539 ************ 2025-05-25 03:47:12.638685 | orchestrator | skipping: [testbed-node-5] 2025-05-25 03:47:12.639511 | orchestrator | 2025-05-25 03:47:12.640341 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-25 03:47:12.641253 | orchestrator | Sunday 25 May 2025 03:47:12 +0000 (0:00:00.210) 0:00:48.749 ************ 2025-05-25 03:47:12.839385 | orchestrator | skipping: [testbed-node-5] 2025-05-25 03:47:12.839609 | orchestrator | 2025-05-25 03:47:12.840545 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-25 03:47:12.841897 | orchestrator | Sunday 25 May 2025 03:47:12 +0000 (0:00:00.201) 0:00:48.951 ************ 2025-05-25 03:47:13.038262 | orchestrator | skipping: [testbed-node-5] 2025-05-25 03:47:13.039290 | orchestrator | 2025-05-25 03:47:13.040442 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-25 03:47:13.041284 | orchestrator | Sunday 25 May 2025 03:47:13 +0000 (0:00:00.199) 0:00:49.150 ************ 2025-05-25 03:47:13.646326 | orchestrator | skipping: [testbed-node-5] 2025-05-25 03:47:13.646480 | orchestrator | 2025-05-25 03:47:13.648196 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-25 03:47:13.649462 | orchestrator | Sunday 25 May 2025 03:47:13 +0000 (0:00:00.606) 0:00:49.756 ************ 2025-05-25 03:47:13.839438 | orchestrator | skipping: [testbed-node-5] 2025-05-25 03:47:13.840173 | orchestrator | 2025-05-25 03:47:13.840777 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-25 03:47:13.841942 | orchestrator | Sunday 25 May 2025 03:47:13 +0000 (0:00:00.195) 0:00:49.951 ************ 2025-05-25 03:47:14.037656 | orchestrator | skipping: [testbed-node-5] 2025-05-25 03:47:14.038208 | orchestrator | 2025-05-25 03:47:14.039154 
| orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-25 03:47:14.040057 | orchestrator | Sunday 25 May 2025 03:47:14 +0000 (0:00:00.198) 0:00:50.150 ************ 2025-05-25 03:47:14.443727 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_2fcb52f5-9fc6-4d4d-aaa1-77a08e6dc4e0) 2025-05-25 03:47:14.444704 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_2fcb52f5-9fc6-4d4d-aaa1-77a08e6dc4e0) 2025-05-25 03:47:14.445676 | orchestrator | 2025-05-25 03:47:14.446352 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-25 03:47:14.447509 | orchestrator | Sunday 25 May 2025 03:47:14 +0000 (0:00:00.404) 0:00:50.555 ************ 2025-05-25 03:47:14.856388 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_201f277c-fdb2-416e-b305-0d8ba90b32cd) 2025-05-25 03:47:14.856634 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_201f277c-fdb2-416e-b305-0d8ba90b32cd) 2025-05-25 03:47:14.857416 | orchestrator | 2025-05-25 03:47:14.858349 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-25 03:47:14.859465 | orchestrator | Sunday 25 May 2025 03:47:14 +0000 (0:00:00.413) 0:00:50.968 ************ 2025-05-25 03:47:15.262600 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_8968a7f7-851b-405b-80f4-de48ab1dffee) 2025-05-25 03:47:15.263028 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_8968a7f7-851b-405b-80f4-de48ab1dffee) 2025-05-25 03:47:15.263897 | orchestrator | 2025-05-25 03:47:15.264445 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-25 03:47:15.265391 | orchestrator | Sunday 25 May 2025 03:47:15 +0000 (0:00:00.406) 0:00:51.375 ************ 2025-05-25 03:47:15.714168 | orchestrator | ok: [testbed-node-5] => 
(item=scsi-0QEMU_QEMU_HARDDISK_603d0154-8a06-450e-a743-756d85b1bc6a) 2025-05-25 03:47:15.714345 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_603d0154-8a06-450e-a743-756d85b1bc6a) 2025-05-25 03:47:15.715212 | orchestrator | 2025-05-25 03:47:15.716689 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-25 03:47:15.716717 | orchestrator | Sunday 25 May 2025 03:47:15 +0000 (0:00:00.450) 0:00:51.825 ************ 2025-05-25 03:47:16.073910 | orchestrator | ok: [testbed-node-5] => (item=ata-QEMU_DVD-ROM_QM00001) 2025-05-25 03:47:16.080207 | orchestrator | 2025-05-25 03:47:16.080294 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-25 03:47:16.080311 | orchestrator | Sunday 25 May 2025 03:47:16 +0000 (0:00:00.360) 0:00:52.186 ************ 2025-05-25 03:47:16.482639 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop0) 2025-05-25 03:47:16.483505 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop1) 2025-05-25 03:47:16.484487 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop2) 2025-05-25 03:47:16.486531 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop3) 2025-05-25 03:47:16.486904 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop4) 2025-05-25 03:47:16.487557 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop5) 2025-05-25 03:47:16.488621 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop6) 2025-05-25 03:47:16.489148 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop7) 2025-05-25 03:47:16.489775 | orchestrator | included: 
/ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sda) 2025-05-25 03:47:16.490263 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdb) 2025-05-25 03:47:16.491280 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdc) 2025-05-25 03:47:16.491681 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdd) 2025-05-25 03:47:16.492490 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sr0) 2025-05-25 03:47:16.493718 | orchestrator | 2025-05-25 03:47:16.494301 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-25 03:47:16.494999 | orchestrator | Sunday 25 May 2025 03:47:16 +0000 (0:00:00.408) 0:00:52.594 ************ 2025-05-25 03:47:16.692248 | orchestrator | skipping: [testbed-node-5] 2025-05-25 03:47:16.693239 | orchestrator | 2025-05-25 03:47:16.693891 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-25 03:47:16.695361 | orchestrator | Sunday 25 May 2025 03:47:16 +0000 (0:00:00.209) 0:00:52.804 ************ 2025-05-25 03:47:16.879676 | orchestrator | skipping: [testbed-node-5] 2025-05-25 03:47:16.882127 | orchestrator | 2025-05-25 03:47:16.883408 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-25 03:47:16.884314 | orchestrator | Sunday 25 May 2025 03:47:16 +0000 (0:00:00.187) 0:00:52.992 ************ 2025-05-25 03:47:17.515661 | orchestrator | skipping: [testbed-node-5] 2025-05-25 03:47:17.516353 | orchestrator | 2025-05-25 03:47:17.517611 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-25 03:47:17.518197 | orchestrator | Sunday 25 May 2025 03:47:17 +0000 (0:00:00.632) 0:00:53.624 ************ 2025-05-25 03:47:17.718265 | orchestrator | 
skipping: [testbed-node-5] 2025-05-25 03:47:17.718368 | orchestrator | 2025-05-25 03:47:17.718804 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-25 03:47:17.719944 | orchestrator | Sunday 25 May 2025 03:47:17 +0000 (0:00:00.204) 0:00:53.829 ************ 2025-05-25 03:47:17.921875 | orchestrator | skipping: [testbed-node-5] 2025-05-25 03:47:17.922983 | orchestrator | 2025-05-25 03:47:17.927677 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-25 03:47:17.928797 | orchestrator | Sunday 25 May 2025 03:47:17 +0000 (0:00:00.203) 0:00:54.033 ************ 2025-05-25 03:47:18.108721 | orchestrator | skipping: [testbed-node-5] 2025-05-25 03:47:18.109377 | orchestrator | 2025-05-25 03:47:18.110333 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-25 03:47:18.111069 | orchestrator | Sunday 25 May 2025 03:47:18 +0000 (0:00:00.188) 0:00:54.221 ************ 2025-05-25 03:47:18.309187 | orchestrator | skipping: [testbed-node-5] 2025-05-25 03:47:18.310142 | orchestrator | 2025-05-25 03:47:18.310540 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-25 03:47:18.311513 | orchestrator | Sunday 25 May 2025 03:47:18 +0000 (0:00:00.198) 0:00:54.420 ************ 2025-05-25 03:47:18.521497 | orchestrator | skipping: [testbed-node-5] 2025-05-25 03:47:18.522129 | orchestrator | 2025-05-25 03:47:18.522918 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-25 03:47:18.524869 | orchestrator | Sunday 25 May 2025 03:47:18 +0000 (0:00:00.212) 0:00:54.632 ************ 2025-05-25 03:47:19.165487 | orchestrator | ok: [testbed-node-5] => (item=sda1) 2025-05-25 03:47:19.165773 | orchestrator | ok: [testbed-node-5] => (item=sda14) 2025-05-25 03:47:19.166872 | orchestrator | ok: [testbed-node-5] => (item=sda15) 2025-05-25 
03:47:19.167791 | orchestrator | ok: [testbed-node-5] => (item=sda16) 2025-05-25 03:47:19.169010 | orchestrator | 2025-05-25 03:47:19.169904 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-25 03:47:19.170851 | orchestrator | Sunday 25 May 2025 03:47:19 +0000 (0:00:00.644) 0:00:55.277 ************ 2025-05-25 03:47:19.360518 | orchestrator | skipping: [testbed-node-5] 2025-05-25 03:47:19.361319 | orchestrator | 2025-05-25 03:47:19.362195 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-25 03:47:19.363271 | orchestrator | Sunday 25 May 2025 03:47:19 +0000 (0:00:00.195) 0:00:55.473 ************ 2025-05-25 03:47:19.563558 | orchestrator | skipping: [testbed-node-5] 2025-05-25 03:47:19.563671 | orchestrator | 2025-05-25 03:47:19.564756 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-25 03:47:19.564982 | orchestrator | Sunday 25 May 2025 03:47:19 +0000 (0:00:00.202) 0:00:55.675 ************ 2025-05-25 03:47:19.745130 | orchestrator | skipping: [testbed-node-5] 2025-05-25 03:47:19.745226 | orchestrator | 2025-05-25 03:47:19.745977 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-25 03:47:19.746905 | orchestrator | Sunday 25 May 2025 03:47:19 +0000 (0:00:00.180) 0:00:55.856 ************ 2025-05-25 03:47:19.938257 | orchestrator | skipping: [testbed-node-5] 2025-05-25 03:47:19.938436 | orchestrator | 2025-05-25 03:47:19.939331 | orchestrator | TASK [Check whether ceph_db_wal_devices is used exclusively] ******************* 2025-05-25 03:47:19.940251 | orchestrator | Sunday 25 May 2025 03:47:19 +0000 (0:00:00.191) 0:00:56.047 ************ 2025-05-25 03:47:20.277513 | orchestrator | skipping: [testbed-node-5] 2025-05-25 03:47:20.278120 | orchestrator | 2025-05-25 03:47:20.278757 | orchestrator | TASK [Create dict of block VGs -> PVs from 
ceph_osd_devices] ******************* 2025-05-25 03:47:20.279650 | orchestrator | Sunday 25 May 2025 03:47:20 +0000 (0:00:00.342) 0:00:56.389 ************ 2025-05-25 03:47:20.467301 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '33e996ff-67e1-5789-9eb3-97043475c088'}}) 2025-05-25 03:47:20.467632 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '3ece5568-3437-595e-b3ba-b2f91a77c86c'}}) 2025-05-25 03:47:20.468836 | orchestrator | 2025-05-25 03:47:20.469840 | orchestrator | TASK [Create block VGs] ******************************************************** 2025-05-25 03:47:20.470804 | orchestrator | Sunday 25 May 2025 03:47:20 +0000 (0:00:00.189) 0:00:56.579 ************ 2025-05-25 03:47:22.256269 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-33e996ff-67e1-5789-9eb3-97043475c088', 'data_vg': 'ceph-33e996ff-67e1-5789-9eb3-97043475c088'}) 2025-05-25 03:47:22.258301 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-3ece5568-3437-595e-b3ba-b2f91a77c86c', 'data_vg': 'ceph-3ece5568-3437-595e-b3ba-b2f91a77c86c'}) 2025-05-25 03:47:22.258329 | orchestrator | 2025-05-25 03:47:22.258872 | orchestrator | TASK [Print 'Create block VGs'] ************************************************ 2025-05-25 03:47:22.259582 | orchestrator | Sunday 25 May 2025 03:47:22 +0000 (0:00:01.787) 0:00:58.366 ************ 2025-05-25 03:47:22.422445 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-33e996ff-67e1-5789-9eb3-97043475c088', 'data_vg': 'ceph-33e996ff-67e1-5789-9eb3-97043475c088'})  2025-05-25 03:47:22.422548 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-3ece5568-3437-595e-b3ba-b2f91a77c86c', 'data_vg': 'ceph-3ece5568-3437-595e-b3ba-b2f91a77c86c'})  2025-05-25 03:47:22.422565 | orchestrator | skipping: [testbed-node-5] 2025-05-25 03:47:22.422579 | orchestrator | 2025-05-25 03:47:22.422592 | orchestrator | TASK [Create 
block LVs] ******************************************************** 2025-05-25 03:47:22.422665 | orchestrator | Sunday 25 May 2025 03:47:22 +0000 (0:00:00.165) 0:00:58.532 ************ 2025-05-25 03:47:23.713357 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-33e996ff-67e1-5789-9eb3-97043475c088', 'data_vg': 'ceph-33e996ff-67e1-5789-9eb3-97043475c088'}) 2025-05-25 03:47:23.714587 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-3ece5568-3437-595e-b3ba-b2f91a77c86c', 'data_vg': 'ceph-3ece5568-3437-595e-b3ba-b2f91a77c86c'}) 2025-05-25 03:47:23.715764 | orchestrator | 2025-05-25 03:47:23.717181 | orchestrator | TASK [Print 'Create block LVs'] ************************************************ 2025-05-25 03:47:23.718134 | orchestrator | Sunday 25 May 2025 03:47:23 +0000 (0:00:01.290) 0:00:59.823 ************ 2025-05-25 03:47:23.862664 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-33e996ff-67e1-5789-9eb3-97043475c088', 'data_vg': 'ceph-33e996ff-67e1-5789-9eb3-97043475c088'})  2025-05-25 03:47:23.863693 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-3ece5568-3437-595e-b3ba-b2f91a77c86c', 'data_vg': 'ceph-3ece5568-3437-595e-b3ba-b2f91a77c86c'})  2025-05-25 03:47:23.864564 | orchestrator | skipping: [testbed-node-5] 2025-05-25 03:47:23.866826 | orchestrator | 2025-05-25 03:47:23.867897 | orchestrator | TASK [Create DB VGs] *********************************************************** 2025-05-25 03:47:23.868939 | orchestrator | Sunday 25 May 2025 03:47:23 +0000 (0:00:00.152) 0:00:59.975 ************ 2025-05-25 03:47:24.003468 | orchestrator | skipping: [testbed-node-5] 2025-05-25 03:47:24.003941 | orchestrator | 2025-05-25 03:47:24.005398 | orchestrator | TASK [Print 'Create DB VGs'] *************************************************** 2025-05-25 03:47:24.006553 | orchestrator | Sunday 25 May 2025 03:47:23 +0000 (0:00:00.140) 0:01:00.115 ************ 2025-05-25 03:47:24.145024 | 
orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-33e996ff-67e1-5789-9eb3-97043475c088', 'data_vg': 'ceph-33e996ff-67e1-5789-9eb3-97043475c088'})  2025-05-25 03:47:24.145502 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-3ece5568-3437-595e-b3ba-b2f91a77c86c', 'data_vg': 'ceph-3ece5568-3437-595e-b3ba-b2f91a77c86c'})  2025-05-25 03:47:24.147770 | orchestrator | skipping: [testbed-node-5] 2025-05-25 03:47:24.147909 | orchestrator | 2025-05-25 03:47:24.149928 | orchestrator | TASK [Create WAL VGs] ********************************************************** 2025-05-25 03:47:24.150844 | orchestrator | Sunday 25 May 2025 03:47:24 +0000 (0:00:00.141) 0:01:00.257 ************ 2025-05-25 03:47:24.272062 | orchestrator | skipping: [testbed-node-5] 2025-05-25 03:47:24.272314 | orchestrator | 2025-05-25 03:47:24.273567 | orchestrator | TASK [Print 'Create WAL VGs'] ************************************************** 2025-05-25 03:47:24.274374 | orchestrator | Sunday 25 May 2025 03:47:24 +0000 (0:00:00.126) 0:01:00.383 ************ 2025-05-25 03:47:24.421907 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-33e996ff-67e1-5789-9eb3-97043475c088', 'data_vg': 'ceph-33e996ff-67e1-5789-9eb3-97043475c088'})  2025-05-25 03:47:24.423279 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-3ece5568-3437-595e-b3ba-b2f91a77c86c', 'data_vg': 'ceph-3ece5568-3437-595e-b3ba-b2f91a77c86c'})  2025-05-25 03:47:24.424001 | orchestrator | skipping: [testbed-node-5] 2025-05-25 03:47:24.424770 | orchestrator | 2025-05-25 03:47:24.425421 | orchestrator | TASK [Create DB+WAL VGs] ******************************************************* 2025-05-25 03:47:24.425863 | orchestrator | Sunday 25 May 2025 03:47:24 +0000 (0:00:00.147) 0:01:00.531 ************ 2025-05-25 03:47:24.553292 | orchestrator | skipping: [testbed-node-5] 2025-05-25 03:47:24.554266 | orchestrator | 2025-05-25 03:47:24.555009 | orchestrator | TASK 
[Print 'Create DB+WAL VGs'] *********************************************** 2025-05-25 03:47:24.555601 | orchestrator | Sunday 25 May 2025 03:47:24 +0000 (0:00:00.134) 0:01:00.665 ************ 2025-05-25 03:47:24.703760 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-33e996ff-67e1-5789-9eb3-97043475c088', 'data_vg': 'ceph-33e996ff-67e1-5789-9eb3-97043475c088'})  2025-05-25 03:47:24.704655 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-3ece5568-3437-595e-b3ba-b2f91a77c86c', 'data_vg': 'ceph-3ece5568-3437-595e-b3ba-b2f91a77c86c'})  2025-05-25 03:47:24.705258 | orchestrator | skipping: [testbed-node-5] 2025-05-25 03:47:24.705671 | orchestrator | 2025-05-25 03:47:24.707531 | orchestrator | TASK [Prepare variables for OSD count check] *********************************** 2025-05-25 03:47:24.707563 | orchestrator | Sunday 25 May 2025 03:47:24 +0000 (0:00:00.150) 0:01:00.816 ************ 2025-05-25 03:47:25.030332 | orchestrator | ok: [testbed-node-5] 2025-05-25 03:47:25.031136 | orchestrator | 2025-05-25 03:47:25.032096 | orchestrator | TASK [Count OSDs put on ceph_db_devices defined in lvm_volumes] **************** 2025-05-25 03:47:25.034572 | orchestrator | Sunday 25 May 2025 03:47:25 +0000 (0:00:00.327) 0:01:01.143 ************ 2025-05-25 03:47:25.200323 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-33e996ff-67e1-5789-9eb3-97043475c088', 'data_vg': 'ceph-33e996ff-67e1-5789-9eb3-97043475c088'})  2025-05-25 03:47:25.201044 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-3ece5568-3437-595e-b3ba-b2f91a77c86c', 'data_vg': 'ceph-3ece5568-3437-595e-b3ba-b2f91a77c86c'})  2025-05-25 03:47:25.201916 | orchestrator | skipping: [testbed-node-5] 2025-05-25 03:47:25.202527 | orchestrator | 2025-05-25 03:47:25.205339 | orchestrator | TASK [Count OSDs put on ceph_wal_devices defined in lvm_volumes] *************** 2025-05-25 03:47:25.205652 | orchestrator | Sunday 25 May 2025 
03:47:25 +0000 (0:00:00.169) 0:01:01.313 ************ 2025-05-25 03:47:25.352076 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-33e996ff-67e1-5789-9eb3-97043475c088', 'data_vg': 'ceph-33e996ff-67e1-5789-9eb3-97043475c088'})  2025-05-25 03:47:25.352392 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-3ece5568-3437-595e-b3ba-b2f91a77c86c', 'data_vg': 'ceph-3ece5568-3437-595e-b3ba-b2f91a77c86c'})  2025-05-25 03:47:25.353664 | orchestrator | skipping: [testbed-node-5] 2025-05-25 03:47:25.354009 | orchestrator | 2025-05-25 03:47:25.354681 | orchestrator | TASK [Count OSDs put on ceph_db_wal_devices defined in lvm_volumes] ************ 2025-05-25 03:47:25.355209 | orchestrator | Sunday 25 May 2025 03:47:25 +0000 (0:00:00.150) 0:01:01.464 ************ 2025-05-25 03:47:25.504665 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-33e996ff-67e1-5789-9eb3-97043475c088', 'data_vg': 'ceph-33e996ff-67e1-5789-9eb3-97043475c088'})  2025-05-25 03:47:25.505035 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-3ece5568-3437-595e-b3ba-b2f91a77c86c', 'data_vg': 'ceph-3ece5568-3437-595e-b3ba-b2f91a77c86c'})  2025-05-25 03:47:25.505872 | orchestrator | skipping: [testbed-node-5] 2025-05-25 03:47:25.505966 | orchestrator | 2025-05-25 03:47:25.506875 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB VG] ********************* 2025-05-25 03:47:25.507460 | orchestrator | Sunday 25 May 2025 03:47:25 +0000 (0:00:00.152) 0:01:01.616 ************ 2025-05-25 03:47:25.633374 | orchestrator | skipping: [testbed-node-5] 2025-05-25 03:47:25.633956 | orchestrator | 2025-05-25 03:47:25.634647 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a WAL VG] ******************** 2025-05-25 03:47:25.635686 | orchestrator | Sunday 25 May 2025 03:47:25 +0000 (0:00:00.129) 0:01:01.746 ************ 2025-05-25 03:47:25.766210 | orchestrator | skipping: [testbed-node-5] 2025-05-25 
03:47:25.766673 | orchestrator | 2025-05-25 03:47:25.767649 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB+WAL VG] ***************** 2025-05-25 03:47:25.769555 | orchestrator | Sunday 25 May 2025 03:47:25 +0000 (0:00:00.132) 0:01:01.878 ************ 2025-05-25 03:47:25.905397 | orchestrator | skipping: [testbed-node-5] 2025-05-25 03:47:25.906535 | orchestrator | 2025-05-25 03:47:25.908009 | orchestrator | TASK [Print number of OSDs wanted per DB VG] *********************************** 2025-05-25 03:47:25.910530 | orchestrator | Sunday 25 May 2025 03:47:25 +0000 (0:00:00.139) 0:01:02.018 ************ 2025-05-25 03:47:26.051206 | orchestrator | ok: [testbed-node-5] => { 2025-05-25 03:47:26.052202 | orchestrator |  "_num_osds_wanted_per_db_vg": {} 2025-05-25 03:47:26.055315 | orchestrator | } 2025-05-25 03:47:26.056255 | orchestrator | 2025-05-25 03:47:26.057492 | orchestrator | TASK [Print number of OSDs wanted per WAL VG] ********************************** 2025-05-25 03:47:26.057858 | orchestrator | Sunday 25 May 2025 03:47:26 +0000 (0:00:00.144) 0:01:02.162 ************ 2025-05-25 03:47:26.204628 | orchestrator | ok: [testbed-node-5] => { 2025-05-25 03:47:26.205156 | orchestrator |  "_num_osds_wanted_per_wal_vg": {} 2025-05-25 03:47:26.208722 | orchestrator | } 2025-05-25 03:47:26.209301 | orchestrator | 2025-05-25 03:47:26.210343 | orchestrator | TASK [Print number of OSDs wanted per DB+WAL VG] ******************************* 2025-05-25 03:47:26.210980 | orchestrator | Sunday 25 May 2025 03:47:26 +0000 (0:00:00.153) 0:01:02.316 ************ 2025-05-25 03:47:26.356960 | orchestrator | ok: [testbed-node-5] => { 2025-05-25 03:47:26.358090 | orchestrator |  "_num_osds_wanted_per_db_wal_vg": {} 2025-05-25 03:47:26.359856 | orchestrator | } 2025-05-25 03:47:26.362890 | orchestrator | 2025-05-25 03:47:26.363573 | orchestrator | TASK [Gather DB VGs with total and available size in bytes] ******************** 2025-05-25 03:47:26.364999 | 
orchestrator | Sunday 25 May 2025 03:47:26 +0000 (0:00:00.152) 0:01:02.468 ************ 2025-05-25 03:47:26.868397 | orchestrator | ok: [testbed-node-5] 2025-05-25 03:47:26.869577 | orchestrator | 2025-05-25 03:47:26.869619 | orchestrator | TASK [Gather WAL VGs with total and available size in bytes] ******************* 2025-05-25 03:47:26.870260 | orchestrator | Sunday 25 May 2025 03:47:26 +0000 (0:00:00.511) 0:01:02.980 ************ 2025-05-25 03:47:27.371733 | orchestrator | ok: [testbed-node-5] 2025-05-25 03:47:27.371833 | orchestrator | 2025-05-25 03:47:27.372230 | orchestrator | TASK [Gather DB+WAL VGs with total and available size in bytes] **************** 2025-05-25 03:47:27.373152 | orchestrator | Sunday 25 May 2025 03:47:27 +0000 (0:00:00.503) 0:01:03.484 ************ 2025-05-25 03:47:28.126153 | orchestrator | ok: [testbed-node-5] 2025-05-25 03:47:28.126686 | orchestrator | 2025-05-25 03:47:28.127660 | orchestrator | TASK [Combine JSON from _db/wal/db_wal_vgs_cmd_output] ************************* 2025-05-25 03:47:28.128375 | orchestrator | Sunday 25 May 2025 03:47:28 +0000 (0:00:00.754) 0:01:04.238 ************ 2025-05-25 03:47:28.268011 | orchestrator | ok: [testbed-node-5] 2025-05-25 03:47:28.269751 | orchestrator | 2025-05-25 03:47:28.269809 | orchestrator | TASK [Calculate VG sizes (without buffer)] ************************************* 2025-05-25 03:47:28.269886 | orchestrator | Sunday 25 May 2025 03:47:28 +0000 (0:00:00.142) 0:01:04.381 ************ 2025-05-25 03:47:28.377307 | orchestrator | skipping: [testbed-node-5] 2025-05-25 03:47:28.377984 | orchestrator | 2025-05-25 03:47:28.378687 | orchestrator | TASK [Calculate VG sizes (with buffer)] **************************************** 2025-05-25 03:47:28.379340 | orchestrator | Sunday 25 May 2025 03:47:28 +0000 (0:00:00.108) 0:01:04.490 ************ 2025-05-25 03:47:28.491807 | orchestrator | skipping: [testbed-node-5] 2025-05-25 03:47:28.491879 | orchestrator | 2025-05-25 03:47:28.492947 | 
orchestrator | TASK [Print LVM VGs report data] *********************************************** 2025-05-25 03:47:28.493628 | orchestrator | Sunday 25 May 2025 03:47:28 +0000 (0:00:00.114) 0:01:04.604 ************ 2025-05-25 03:47:28.641986 | orchestrator | ok: [testbed-node-5] => { 2025-05-25 03:47:28.642710 | orchestrator |  "vgs_report": { 2025-05-25 03:47:28.643217 | orchestrator |  "vg": [] 2025-05-25 03:47:28.644525 | orchestrator |  } 2025-05-25 03:47:28.646068 | orchestrator | } 2025-05-25 03:47:28.646139 | orchestrator | 2025-05-25 03:47:28.646473 | orchestrator | TASK [Print LVM VG sizes] ****************************************************** 2025-05-25 03:47:28.647187 | orchestrator | Sunday 25 May 2025 03:47:28 +0000 (0:00:00.150) 0:01:04.754 ************ 2025-05-25 03:47:28.773228 | orchestrator | skipping: [testbed-node-5] 2025-05-25 03:47:28.774237 | orchestrator | 2025-05-25 03:47:28.775457 | orchestrator | TASK [Calculate size needed for LVs on ceph_db_devices] ************************ 2025-05-25 03:47:28.776894 | orchestrator | Sunday 25 May 2025 03:47:28 +0000 (0:00:00.130) 0:01:04.885 ************ 2025-05-25 03:47:28.912609 | orchestrator | skipping: [testbed-node-5] 2025-05-25 03:47:28.912689 | orchestrator | 2025-05-25 03:47:28.913712 | orchestrator | TASK [Print size needed for LVs on ceph_db_devices] **************************** 2025-05-25 03:47:28.914163 | orchestrator | Sunday 25 May 2025 03:47:28 +0000 (0:00:00.137) 0:01:05.023 ************ 2025-05-25 03:47:29.045609 | orchestrator | skipping: [testbed-node-5] 2025-05-25 03:47:29.046368 | orchestrator | 2025-05-25 03:47:29.047322 | orchestrator | TASK [Fail if size of DB LVs on ceph_db_devices > available] ******************* 2025-05-25 03:47:29.048301 | orchestrator | Sunday 25 May 2025 03:47:29 +0000 (0:00:00.135) 0:01:05.158 ************ 2025-05-25 03:47:29.178600 | orchestrator | skipping: [testbed-node-5] 2025-05-25 03:47:29.179666 | orchestrator | 2025-05-25 03:47:29.180419 | 
orchestrator | TASK [Calculate size needed for LVs on ceph_wal_devices] *********************** 2025-05-25 03:47:29.181947 | orchestrator | Sunday 25 May 2025 03:47:29 +0000 (0:00:00.132) 0:01:05.290 ************ 2025-05-25 03:47:29.326499 | orchestrator | skipping: [testbed-node-5] 2025-05-25 03:47:29.326606 | orchestrator | 2025-05-25 03:47:29.327997 | orchestrator | TASK [Print size needed for LVs on ceph_wal_devices] *************************** 2025-05-25 03:47:29.328038 | orchestrator | Sunday 25 May 2025 03:47:29 +0000 (0:00:00.147) 0:01:05.438 ************ 2025-05-25 03:47:29.457837 | orchestrator | skipping: [testbed-node-5] 2025-05-25 03:47:29.459569 | orchestrator | 2025-05-25 03:47:29.464063 | orchestrator | TASK [Fail if size of WAL LVs on ceph_wal_devices > available] ***************** 2025-05-25 03:47:29.465293 | orchestrator | Sunday 25 May 2025 03:47:29 +0000 (0:00:00.130) 0:01:05.569 ************ 2025-05-25 03:47:29.594792 | orchestrator | skipping: [testbed-node-5] 2025-05-25 03:47:29.595713 | orchestrator | 2025-05-25 03:47:29.596600 | orchestrator | TASK [Calculate size needed for WAL LVs on ceph_db_wal_devices] **************** 2025-05-25 03:47:29.597498 | orchestrator | Sunday 25 May 2025 03:47:29 +0000 (0:00:00.138) 0:01:05.707 ************ 2025-05-25 03:47:29.734616 | orchestrator | skipping: [testbed-node-5] 2025-05-25 03:47:29.735795 | orchestrator | 2025-05-25 03:47:29.736317 | orchestrator | TASK [Print size needed for WAL LVs on ceph_db_wal_devices] ******************** 2025-05-25 03:47:29.737078 | orchestrator | Sunday 25 May 2025 03:47:29 +0000 (0:00:00.139) 0:01:05.847 ************ 2025-05-25 03:47:30.080276 | orchestrator | skipping: [testbed-node-5] 2025-05-25 03:47:30.080968 | orchestrator | 2025-05-25 03:47:30.081384 | orchestrator | TASK [Calculate size needed for DB LVs on ceph_db_wal_devices] ***************** 2025-05-25 03:47:30.082085 | orchestrator | Sunday 25 May 2025 03:47:30 +0000 (0:00:00.346) 0:01:06.193 ************ 
2025-05-25 03:47:30.204446 | orchestrator | skipping: [testbed-node-5] 2025-05-25 03:47:30.206555 | orchestrator | 2025-05-25 03:47:30.208227 | orchestrator | TASK [Print size needed for DB LVs on ceph_db_wal_devices] ********************* 2025-05-25 03:47:30.209769 | orchestrator | Sunday 25 May 2025 03:47:30 +0000 (0:00:00.123) 0:01:06.317 ************ 2025-05-25 03:47:30.343601 | orchestrator | skipping: [testbed-node-5] 2025-05-25 03:47:30.344026 | orchestrator | 2025-05-25 03:47:30.345222 | orchestrator | TASK [Fail if size of DB+WAL LVs on ceph_db_wal_devices > available] *********** 2025-05-25 03:47:30.345911 | orchestrator | Sunday 25 May 2025 03:47:30 +0000 (0:00:00.139) 0:01:06.456 ************ 2025-05-25 03:47:30.479609 | orchestrator | skipping: [testbed-node-5] 2025-05-25 03:47:30.481051 | orchestrator | 2025-05-25 03:47:30.483357 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_devices] ************************* 2025-05-25 03:47:30.483465 | orchestrator | Sunday 25 May 2025 03:47:30 +0000 (0:00:00.135) 0:01:06.591 ************ 2025-05-25 03:47:30.605006 | orchestrator | skipping: [testbed-node-5] 2025-05-25 03:47:30.606228 | orchestrator | 2025-05-25 03:47:30.607404 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_wal_devices] ********************* 2025-05-25 03:47:30.607982 | orchestrator | Sunday 25 May 2025 03:47:30 +0000 (0:00:00.125) 0:01:06.717 ************ 2025-05-25 03:47:30.746939 | orchestrator | skipping: [testbed-node-5] 2025-05-25 03:47:30.747548 | orchestrator | 2025-05-25 03:47:30.747896 | orchestrator | TASK [Create DB LVs for ceph_db_devices] *************************************** 2025-05-25 03:47:30.748412 | orchestrator | Sunday 25 May 2025 03:47:30 +0000 (0:00:00.143) 0:01:06.860 ************ 2025-05-25 03:47:30.895764 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-33e996ff-67e1-5789-9eb3-97043475c088', 'data_vg': 'ceph-33e996ff-67e1-5789-9eb3-97043475c088'})  2025-05-25 
03:47:30.896841 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-3ece5568-3437-595e-b3ba-b2f91a77c86c', 'data_vg': 'ceph-3ece5568-3437-595e-b3ba-b2f91a77c86c'})  2025-05-25 03:47:30.897764 | orchestrator | skipping: [testbed-node-5] 2025-05-25 03:47:30.899597 | orchestrator | 2025-05-25 03:47:30.899684 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_devices'] ******************************* 2025-05-25 03:47:30.900654 | orchestrator | Sunday 25 May 2025 03:47:30 +0000 (0:00:00.148) 0:01:07.008 ************ 2025-05-25 03:47:31.044546 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-33e996ff-67e1-5789-9eb3-97043475c088', 'data_vg': 'ceph-33e996ff-67e1-5789-9eb3-97043475c088'})  2025-05-25 03:47:31.044860 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-3ece5568-3437-595e-b3ba-b2f91a77c86c', 'data_vg': 'ceph-3ece5568-3437-595e-b3ba-b2f91a77c86c'})  2025-05-25 03:47:31.048494 | orchestrator | skipping: [testbed-node-5] 2025-05-25 03:47:31.048693 | orchestrator | 2025-05-25 03:47:31.049556 | orchestrator | TASK [Create WAL LVs for ceph_wal_devices] ************************************* 2025-05-25 03:47:31.050384 | orchestrator | Sunday 25 May 2025 03:47:31 +0000 (0:00:00.148) 0:01:07.156 ************ 2025-05-25 03:47:31.191731 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-33e996ff-67e1-5789-9eb3-97043475c088', 'data_vg': 'ceph-33e996ff-67e1-5789-9eb3-97043475c088'})  2025-05-25 03:47:31.191922 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-3ece5568-3437-595e-b3ba-b2f91a77c86c', 'data_vg': 'ceph-3ece5568-3437-595e-b3ba-b2f91a77c86c'})  2025-05-25 03:47:31.193325 | orchestrator | skipping: [testbed-node-5] 2025-05-25 03:47:31.194168 | orchestrator | 2025-05-25 03:47:31.196201 | orchestrator | TASK [Print 'Create WAL LVs for ceph_wal_devices'] ***************************** 2025-05-25 03:47:31.196771 | orchestrator | Sunday 25 May 2025 03:47:31 
+0000 (0:00:00.146) 0:01:07.303 ************ 2025-05-25 03:47:31.336547 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-33e996ff-67e1-5789-9eb3-97043475c088', 'data_vg': 'ceph-33e996ff-67e1-5789-9eb3-97043475c088'})  2025-05-25 03:47:31.337462 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-3ece5568-3437-595e-b3ba-b2f91a77c86c', 'data_vg': 'ceph-3ece5568-3437-595e-b3ba-b2f91a77c86c'})  2025-05-25 03:47:31.339808 | orchestrator | skipping: [testbed-node-5] 2025-05-25 03:47:31.340177 | orchestrator | 2025-05-25 03:47:31.341170 | orchestrator | TASK [Create WAL LVs for ceph_db_wal_devices] ********************************** 2025-05-25 03:47:31.341698 | orchestrator | Sunday 25 May 2025 03:47:31 +0000 (0:00:00.144) 0:01:07.448 ************ 2025-05-25 03:47:31.498792 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-33e996ff-67e1-5789-9eb3-97043475c088', 'data_vg': 'ceph-33e996ff-67e1-5789-9eb3-97043475c088'})  2025-05-25 03:47:31.500295 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-3ece5568-3437-595e-b3ba-b2f91a77c86c', 'data_vg': 'ceph-3ece5568-3437-595e-b3ba-b2f91a77c86c'})  2025-05-25 03:47:31.500951 | orchestrator | skipping: [testbed-node-5] 2025-05-25 03:47:31.501694 | orchestrator | 2025-05-25 03:47:31.502539 | orchestrator | TASK [Print 'Create WAL LVs for ceph_db_wal_devices'] ************************** 2025-05-25 03:47:31.502926 | orchestrator | Sunday 25 May 2025 03:47:31 +0000 (0:00:00.161) 0:01:07.609 ************ 2025-05-25 03:47:31.636210 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-33e996ff-67e1-5789-9eb3-97043475c088', 'data_vg': 'ceph-33e996ff-67e1-5789-9eb3-97043475c088'})  2025-05-25 03:47:31.636475 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-3ece5568-3437-595e-b3ba-b2f91a77c86c', 'data_vg': 'ceph-3ece5568-3437-595e-b3ba-b2f91a77c86c'})  2025-05-25 03:47:31.636741 | orchestrator | skipping: 
[testbed-node-5] 2025-05-25 03:47:31.637828 | orchestrator | 2025-05-25 03:47:31.638968 | orchestrator | TASK [Create DB LVs for ceph_db_wal_devices] *********************************** 2025-05-25 03:47:31.639549 | orchestrator | Sunday 25 May 2025 03:47:31 +0000 (0:00:00.139) 0:01:07.749 ************ 2025-05-25 03:47:31.998372 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-33e996ff-67e1-5789-9eb3-97043475c088', 'data_vg': 'ceph-33e996ff-67e1-5789-9eb3-97043475c088'})  2025-05-25 03:47:31.998568 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-3ece5568-3437-595e-b3ba-b2f91a77c86c', 'data_vg': 'ceph-3ece5568-3437-595e-b3ba-b2f91a77c86c'})  2025-05-25 03:47:31.999500 | orchestrator | skipping: [testbed-node-5] 2025-05-25 03:47:32.000303 | orchestrator | 2025-05-25 03:47:32.001206 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_wal_devices'] *************************** 2025-05-25 03:47:32.002850 | orchestrator | Sunday 25 May 2025 03:47:31 +0000 (0:00:00.361) 0:01:08.111 ************ 2025-05-25 03:47:32.148242 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-33e996ff-67e1-5789-9eb3-97043475c088', 'data_vg': 'ceph-33e996ff-67e1-5789-9eb3-97043475c088'})  2025-05-25 03:47:32.149205 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-3ece5568-3437-595e-b3ba-b2f91a77c86c', 'data_vg': 'ceph-3ece5568-3437-595e-b3ba-b2f91a77c86c'})  2025-05-25 03:47:32.150087 | orchestrator | skipping: [testbed-node-5] 2025-05-25 03:47:32.151255 | orchestrator | 2025-05-25 03:47:32.151769 | orchestrator | TASK [Get list of Ceph LVs with associated VGs] ******************************** 2025-05-25 03:47:32.152407 | orchestrator | Sunday 25 May 2025 03:47:32 +0000 (0:00:00.148) 0:01:08.259 ************ 2025-05-25 03:47:32.668937 | orchestrator | ok: [testbed-node-5] 2025-05-25 03:47:32.670399 | orchestrator | 2025-05-25 03:47:32.670572 | orchestrator | TASK [Get list of Ceph PVs with 
associated VGs] ******************************** 2025-05-25 03:47:32.671872 | orchestrator | Sunday 25 May 2025 03:47:32 +0000 (0:00:00.518) 0:01:08.778 ************ 2025-05-25 03:47:33.179522 | orchestrator | ok: [testbed-node-5] 2025-05-25 03:47:33.180711 | orchestrator | 2025-05-25 03:47:33.183796 | orchestrator | TASK [Combine JSON from _lvs_cmd_output/_pvs_cmd_output] *********************** 2025-05-25 03:47:33.184649 | orchestrator | Sunday 25 May 2025 03:47:33 +0000 (0:00:00.513) 0:01:09.292 ************ 2025-05-25 03:47:33.344002 | orchestrator | ok: [testbed-node-5] 2025-05-25 03:47:33.344094 | orchestrator | 2025-05-25 03:47:33.344657 | orchestrator | TASK [Create list of VG/LV names] ********************************************** 2025-05-25 03:47:33.344680 | orchestrator | Sunday 25 May 2025 03:47:33 +0000 (0:00:00.163) 0:01:09.455 ************ 2025-05-25 03:47:33.533136 | orchestrator | ok: [testbed-node-5] => (item={'lv_name': 'osd-block-33e996ff-67e1-5789-9eb3-97043475c088', 'vg_name': 'ceph-33e996ff-67e1-5789-9eb3-97043475c088'}) 2025-05-25 03:47:33.533228 | orchestrator | ok: [testbed-node-5] => (item={'lv_name': 'osd-block-3ece5568-3437-595e-b3ba-b2f91a77c86c', 'vg_name': 'ceph-3ece5568-3437-595e-b3ba-b2f91a77c86c'}) 2025-05-25 03:47:33.533519 | orchestrator | 2025-05-25 03:47:33.534158 | orchestrator | TASK [Fail if block LV defined in lvm_volumes is missing] ********************** 2025-05-25 03:47:33.535030 | orchestrator | Sunday 25 May 2025 03:47:33 +0000 (0:00:00.189) 0:01:09.644 ************ 2025-05-25 03:47:33.695697 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-33e996ff-67e1-5789-9eb3-97043475c088', 'data_vg': 'ceph-33e996ff-67e1-5789-9eb3-97043475c088'})  2025-05-25 03:47:33.696131 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-3ece5568-3437-595e-b3ba-b2f91a77c86c', 'data_vg': 'ceph-3ece5568-3437-595e-b3ba-b2f91a77c86c'})  2025-05-25 03:47:33.696774 | orchestrator | skipping: 
[testbed-node-5] 2025-05-25 03:47:33.697980 | orchestrator | 2025-05-25 03:47:33.698924 | orchestrator | TASK [Fail if DB LV defined in lvm_volumes is missing] ************************* 2025-05-25 03:47:33.698959 | orchestrator | Sunday 25 May 2025 03:47:33 +0000 (0:00:00.163) 0:01:09.808 ************ 2025-05-25 03:47:33.856616 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-33e996ff-67e1-5789-9eb3-97043475c088', 'data_vg': 'ceph-33e996ff-67e1-5789-9eb3-97043475c088'})  2025-05-25 03:47:33.858186 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-3ece5568-3437-595e-b3ba-b2f91a77c86c', 'data_vg': 'ceph-3ece5568-3437-595e-b3ba-b2f91a77c86c'})  2025-05-25 03:47:33.858297 | orchestrator | skipping: [testbed-node-5] 2025-05-25 03:47:33.860800 | orchestrator | 2025-05-25 03:47:33.860894 | orchestrator | TASK [Fail if WAL LV defined in lvm_volumes is missing] ************************ 2025-05-25 03:47:33.861526 | orchestrator | Sunday 25 May 2025 03:47:33 +0000 (0:00:00.160) 0:01:09.968 ************ 2025-05-25 03:47:34.009441 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-33e996ff-67e1-5789-9eb3-97043475c088', 'data_vg': 'ceph-33e996ff-67e1-5789-9eb3-97043475c088'})  2025-05-25 03:47:34.009548 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-3ece5568-3437-595e-b3ba-b2f91a77c86c', 'data_vg': 'ceph-3ece5568-3437-595e-b3ba-b2f91a77c86c'})  2025-05-25 03:47:34.010813 | orchestrator | skipping: [testbed-node-5] 2025-05-25 03:47:34.011949 | orchestrator | 2025-05-25 03:47:34.012500 | orchestrator | TASK [Print LVM report data] *************************************************** 2025-05-25 03:47:34.013413 | orchestrator | Sunday 25 May 2025 03:47:34 +0000 (0:00:00.152) 0:01:10.122 ************ 2025-05-25 03:47:34.169706 | orchestrator | ok: [testbed-node-5] => { 2025-05-25 03:47:34.170567 | orchestrator |  "lvm_report": { 2025-05-25 03:47:34.171934 | orchestrator |  "lv": [ 2025-05-25 
03:47:34.173804 | orchestrator |  { 2025-05-25 03:47:34.174946 | orchestrator |  "lv_name": "osd-block-33e996ff-67e1-5789-9eb3-97043475c088", 2025-05-25 03:47:34.176532 | orchestrator |  "vg_name": "ceph-33e996ff-67e1-5789-9eb3-97043475c088" 2025-05-25 03:47:34.177545 | orchestrator |  }, 2025-05-25 03:47:34.178562 | orchestrator |  { 2025-05-25 03:47:34.179278 | orchestrator |  "lv_name": "osd-block-3ece5568-3437-595e-b3ba-b2f91a77c86c", 2025-05-25 03:47:34.180321 | orchestrator |  "vg_name": "ceph-3ece5568-3437-595e-b3ba-b2f91a77c86c" 2025-05-25 03:47:34.180801 | orchestrator |  } 2025-05-25 03:47:34.181656 | orchestrator |  ], 2025-05-25 03:47:34.182416 | orchestrator |  "pv": [ 2025-05-25 03:47:34.183293 | orchestrator |  { 2025-05-25 03:47:34.183736 | orchestrator |  "pv_name": "/dev/sdb", 2025-05-25 03:47:34.184600 | orchestrator |  "vg_name": "ceph-33e996ff-67e1-5789-9eb3-97043475c088" 2025-05-25 03:47:34.185398 | orchestrator |  }, 2025-05-25 03:47:34.186090 | orchestrator |  { 2025-05-25 03:47:34.186864 | orchestrator |  "pv_name": "/dev/sdc", 2025-05-25 03:47:34.187598 | orchestrator |  "vg_name": "ceph-3ece5568-3437-595e-b3ba-b2f91a77c86c" 2025-05-25 03:47:34.188344 | orchestrator |  } 2025-05-25 03:47:34.188775 | orchestrator |  ] 2025-05-25 03:47:34.189301 | orchestrator |  } 2025-05-25 03:47:34.189794 | orchestrator | } 2025-05-25 03:47:34.190218 | orchestrator | 2025-05-25 03:47:34.190743 | orchestrator | PLAY RECAP ********************************************************************* 2025-05-25 03:47:34.191239 | orchestrator | 2025-05-25 03:47:34 | INFO  | Play has been completed. There may now be a delay until all logs have been written. 2025-05-25 03:47:34.191264 | orchestrator | 2025-05-25 03:47:34 | INFO  | Please wait and do not abort execution. 
2025-05-25 03:47:34.191724 | orchestrator | testbed-node-3 : ok=51  changed=2  unreachable=0 failed=0 skipped=62  rescued=0 ignored=0 2025-05-25 03:47:34.192320 | orchestrator | testbed-node-4 : ok=51  changed=2  unreachable=0 failed=0 skipped=62  rescued=0 ignored=0 2025-05-25 03:47:34.192728 | orchestrator | testbed-node-5 : ok=51  changed=2  unreachable=0 failed=0 skipped=62  rescued=0 ignored=0 2025-05-25 03:47:34.193258 | orchestrator | 2025-05-25 03:47:34.193665 | orchestrator | 2025-05-25 03:47:34.194218 | orchestrator | 2025-05-25 03:47:34.194853 | orchestrator | TASKS RECAP ******************************************************************** 2025-05-25 03:47:34.195237 | orchestrator | Sunday 25 May 2025 03:47:34 +0000 (0:00:00.159) 0:01:10.281 ************ 2025-05-25 03:47:34.195518 | orchestrator | =============================================================================== 2025-05-25 03:47:34.195963 | orchestrator | Create block VGs -------------------------------------------------------- 5.63s 2025-05-25 03:47:34.196479 | orchestrator | Create block LVs -------------------------------------------------------- 4.02s 2025-05-25 03:47:34.196960 | orchestrator | Gather DB VGs with total and available size in bytes -------------------- 1.84s 2025-05-25 03:47:34.197437 | orchestrator | Gather DB+WAL VGs with total and available size in bytes ---------------- 1.82s 2025-05-25 03:47:34.197951 | orchestrator | Get list of Ceph LVs with associated VGs -------------------------------- 1.53s 2025-05-25 03:47:34.198676 | orchestrator | Get list of Ceph PVs with associated VGs -------------------------------- 1.53s 2025-05-25 03:47:34.199062 | orchestrator | Gather WAL VGs with total and available size in bytes ------------------- 1.48s 2025-05-25 03:47:34.200061 | orchestrator | Add known partitions to the list of available block devices ------------- 1.40s 2025-05-25 03:47:34.200647 | orchestrator | Add known links to the list of available block devices 
------------------ 1.21s 2025-05-25 03:47:34.201319 | orchestrator | Add known partitions to the list of available block devices ------------- 1.08s 2025-05-25 03:47:34.201479 | orchestrator | Print LVM report data --------------------------------------------------- 0.95s 2025-05-25 03:47:34.201929 | orchestrator | Add known partitions to the list of available block devices ------------- 0.84s 2025-05-25 03:47:34.202436 | orchestrator | Add known links to the list of available block devices ------------------ 0.82s 2025-05-25 03:47:34.202854 | orchestrator | Get extra vars for Ceph configuration ----------------------------------- 0.76s 2025-05-25 03:47:34.203272 | orchestrator | Get initial list of available block devices ----------------------------- 0.69s 2025-05-25 03:47:34.203754 | orchestrator | Create DB LVs for ceph_db_wal_devices ----------------------------------- 0.67s 2025-05-25 03:47:34.204292 | orchestrator | Fail if block LV defined in lvm_volumes is missing ---------------------- 0.66s 2025-05-25 03:47:34.204747 | orchestrator | Print 'Create WAL LVs for ceph_wal_devices' ----------------------------- 0.65s 2025-05-25 03:47:34.205155 | orchestrator | Add known partitions to the list of available block devices ------------- 0.64s 2025-05-25 03:47:34.205614 | orchestrator | Print 'Create DB VGs' --------------------------------------------------- 0.64s 2025-05-25 03:47:36.491507 | orchestrator | 2025-05-25 03:47:36 | INFO  | Task 0824ed87-fe04-42f2-82f0-9fee54353fcc (facts) was prepared for execution. 2025-05-25 03:47:36.491640 | orchestrator | 2025-05-25 03:47:36 | INFO  | It takes a moment until task 0824ed87-fe04-42f2-82f0-9fee54353fcc (facts) has been started and output is visible here. 
2025-05-25 03:47:40.502589 | orchestrator | 2025-05-25 03:47:40.503394 | orchestrator | PLAY [Apply role facts] ******************************************************** 2025-05-25 03:47:40.503475 | orchestrator | 2025-05-25 03:47:40.505814 | orchestrator | TASK [osism.commons.facts : Create custom facts directory] ********************* 2025-05-25 03:47:40.506985 | orchestrator | Sunday 25 May 2025 03:47:40 +0000 (0:00:00.262) 0:00:00.262 ************ 2025-05-25 03:47:41.551563 | orchestrator | ok: [testbed-manager] 2025-05-25 03:47:41.552066 | orchestrator | ok: [testbed-node-0] 2025-05-25 03:47:41.552881 | orchestrator | ok: [testbed-node-1] 2025-05-25 03:47:41.553917 | orchestrator | ok: [testbed-node-2] 2025-05-25 03:47:41.554400 | orchestrator | ok: [testbed-node-3] 2025-05-25 03:47:41.555213 | orchestrator | ok: [testbed-node-4] 2025-05-25 03:47:41.555507 | orchestrator | ok: [testbed-node-5] 2025-05-25 03:47:41.556344 | orchestrator | 2025-05-25 03:47:41.556815 | orchestrator | TASK [osism.commons.facts : Copy fact files] *********************************** 2025-05-25 03:47:41.557863 | orchestrator | Sunday 25 May 2025 03:47:41 +0000 (0:00:01.046) 0:00:01.308 ************ 2025-05-25 03:47:41.709185 | orchestrator | skipping: [testbed-manager] 2025-05-25 03:47:41.788328 | orchestrator | skipping: [testbed-node-0] 2025-05-25 03:47:41.868391 | orchestrator | skipping: [testbed-node-1] 2025-05-25 03:47:41.947457 | orchestrator | skipping: [testbed-node-2] 2025-05-25 03:47:42.027894 | orchestrator | skipping: [testbed-node-3] 2025-05-25 03:47:42.766612 | orchestrator | skipping: [testbed-node-4] 2025-05-25 03:47:42.767651 | orchestrator | skipping: [testbed-node-5] 2025-05-25 03:47:42.768441 | orchestrator | 2025-05-25 03:47:42.769496 | orchestrator | PLAY [Gather facts for all hosts] ********************************************** 2025-05-25 03:47:42.770384 | orchestrator | 2025-05-25 03:47:42.771316 | orchestrator | TASK [Gathers facts about hosts] 
*********************************************** 2025-05-25 03:47:42.772396 | orchestrator | Sunday 25 May 2025 03:47:42 +0000 (0:00:01.221) 0:00:02.529 ************ 2025-05-25 03:47:48.570301 | orchestrator | ok: [testbed-node-2] 2025-05-25 03:47:48.572569 | orchestrator | ok: [testbed-node-1] 2025-05-25 03:47:48.573720 | orchestrator | ok: [testbed-node-0] 2025-05-25 03:47:48.574685 | orchestrator | ok: [testbed-manager] 2025-05-25 03:47:48.575032 | orchestrator | ok: [testbed-node-3] 2025-05-25 03:47:48.575867 | orchestrator | ok: [testbed-node-4] 2025-05-25 03:47:48.576764 | orchestrator | ok: [testbed-node-5] 2025-05-25 03:47:48.577346 | orchestrator | 2025-05-25 03:47:48.577903 | orchestrator | PLAY [Gather facts for all hosts if using --limit] ***************************** 2025-05-25 03:47:48.578505 | orchestrator | 2025-05-25 03:47:48.579535 | orchestrator | TASK [Gather facts for all hosts] ********************************************** 2025-05-25 03:47:48.579986 | orchestrator | Sunday 25 May 2025 03:47:48 +0000 (0:00:05.804) 0:00:08.334 ************ 2025-05-25 03:47:48.734707 | orchestrator | skipping: [testbed-manager] 2025-05-25 03:47:48.807902 | orchestrator | skipping: [testbed-node-0] 2025-05-25 03:47:48.885256 | orchestrator | skipping: [testbed-node-1] 2025-05-25 03:47:48.961196 | orchestrator | skipping: [testbed-node-2] 2025-05-25 03:47:49.039500 | orchestrator | skipping: [testbed-node-3] 2025-05-25 03:47:49.071511 | orchestrator | skipping: [testbed-node-4] 2025-05-25 03:47:49.072345 | orchestrator | skipping: [testbed-node-5] 2025-05-25 03:47:49.073574 | orchestrator | 2025-05-25 03:47:49.074263 | orchestrator | PLAY RECAP ********************************************************************* 2025-05-25 03:47:49.074988 | orchestrator | 2025-05-25 03:47:49 | INFO  | Play has been completed. There may now be a delay until all logs have been written. 
2025-05-25 03:47:49.075202 | orchestrator | 2025-05-25 03:47:49 | INFO  | Please wait and do not abort execution. 2025-05-25 03:47:49.076218 | orchestrator | testbed-manager : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-05-25 03:47:49.076431 | orchestrator | testbed-node-0 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-05-25 03:47:49.077277 | orchestrator | testbed-node-1 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-05-25 03:47:49.077659 | orchestrator | testbed-node-2 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-05-25 03:47:49.078184 | orchestrator | testbed-node-3 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-05-25 03:47:49.078532 | orchestrator | testbed-node-4 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-05-25 03:47:49.078936 | orchestrator | testbed-node-5 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-05-25 03:47:49.079418 | orchestrator | 2025-05-25 03:47:49.080259 | orchestrator | 2025-05-25 03:47:49.081166 | orchestrator | TASKS RECAP ******************************************************************** 2025-05-25 03:47:49.081297 | orchestrator | Sunday 25 May 2025 03:47:49 +0000 (0:00:00.502) 0:00:08.836 ************ 2025-05-25 03:47:49.081833 | orchestrator | =============================================================================== 2025-05-25 03:47:49.082301 | orchestrator | Gathers facts about hosts ----------------------------------------------- 5.80s 2025-05-25 03:47:49.083573 | orchestrator | osism.commons.facts : Copy fact files ----------------------------------- 1.22s 2025-05-25 03:47:49.084396 | orchestrator | osism.commons.facts : Create custom facts directory --------------------- 1.05s 2025-05-25 03:47:49.084750 | orchestrator | Gather facts for all hosts ---------------------------------------------- 0.50s 2025-05-25 
03:47:49.661777 | orchestrator | 2025-05-25 03:47:49.664294 | orchestrator | --> DEPLOY IN A NUTSHELL -- START -- Sun May 25 03:47:49 UTC 2025 2025-05-25 03:47:49.664331 | orchestrator | 2025-05-25 03:47:51.375062 | orchestrator | 2025-05-25 03:47:51 | INFO  | Collection nutshell is prepared for execution 2025-05-25 03:47:51.375215 | orchestrator | 2025-05-25 03:47:51 | INFO  | D [0] - dotfiles 2025-05-25 03:47:51.380962 | orchestrator | 2025-05-25 03:47:51 | INFO  | D [0] - homer 2025-05-25 03:47:51.380993 | orchestrator | 2025-05-25 03:47:51 | INFO  | D [0] - netdata 2025-05-25 03:47:51.381005 | orchestrator | 2025-05-25 03:47:51 | INFO  | D [0] - openstackclient 2025-05-25 03:47:51.381017 | orchestrator | 2025-05-25 03:47:51 | INFO  | D [0] - phpmyadmin 2025-05-25 03:47:51.381028 | orchestrator | 2025-05-25 03:47:51 | INFO  | A [0] - common 2025-05-25 03:47:51.382563 | orchestrator | 2025-05-25 03:47:51 | INFO  | A [1] -- loadbalancer 2025-05-25 03:47:51.382710 | orchestrator | 2025-05-25 03:47:51 | INFO  | D [2] --- opensearch 2025-05-25 03:47:51.382729 | orchestrator | 2025-05-25 03:47:51 | INFO  | A [2] --- mariadb-ng 2025-05-25 03:47:51.382740 | orchestrator | 2025-05-25 03:47:51 | INFO  | D [3] ---- horizon 2025-05-25 03:47:51.382752 | orchestrator | 2025-05-25 03:47:51 | INFO  | A [3] ---- keystone 2025-05-25 03:47:51.384365 | orchestrator | 2025-05-25 03:47:51 | INFO  | A [4] ----- neutron 2025-05-25 03:47:51.384387 | orchestrator | 2025-05-25 03:47:51 | INFO  | D [5] ------ wait-for-nova 2025-05-25 03:47:51.384399 | orchestrator | 2025-05-25 03:47:51 | INFO  | A [5] ------ octavia 2025-05-25 03:47:51.384410 | orchestrator | 2025-05-25 03:47:51 | INFO  | D [4] ----- barbican 2025-05-25 03:47:51.384421 | orchestrator | 2025-05-25 03:47:51 | INFO  | D [4] ----- designate 2025-05-25 03:47:51.384465 | orchestrator | 2025-05-25 03:47:51 | INFO  | D [4] ----- ironic 2025-05-25 03:47:51.384496 | orchestrator | 2025-05-25 03:47:51 | INFO  | D [4] ----- placement 
2025-05-25 03:47:51.384530 | orchestrator | 2025-05-25 03:47:51 | INFO  | D [4] ----- magnum 2025-05-25 03:47:51.384541 | orchestrator | 2025-05-25 03:47:51 | INFO  | A [1] -- openvswitch 2025-05-25 03:47:51.384610 | orchestrator | 2025-05-25 03:47:51 | INFO  | D [2] --- ovn 2025-05-25 03:47:51.384625 | orchestrator | 2025-05-25 03:47:51 | INFO  | D [1] -- memcached 2025-05-25 03:47:51.384725 | orchestrator | 2025-05-25 03:47:51 | INFO  | D [1] -- redis 2025-05-25 03:47:51.385023 | orchestrator | 2025-05-25 03:47:51 | INFO  | D [1] -- rabbitmq-ng 2025-05-25 03:47:51.385044 | orchestrator | 2025-05-25 03:47:51 | INFO  | A [0] - kubernetes 2025-05-25 03:47:51.387274 | orchestrator | 2025-05-25 03:47:51 | INFO  | D [1] -- kubeconfig 2025-05-25 03:47:51.387299 | orchestrator | 2025-05-25 03:47:51 | INFO  | A [1] -- copy-kubeconfig 2025-05-25 03:47:51.387743 | orchestrator | 2025-05-25 03:47:51 | INFO  | A [0] - ceph 2025-05-25 03:47:51.388889 | orchestrator | 2025-05-25 03:47:51 | INFO  | A [1] -- ceph-pools 2025-05-25 03:47:51.389072 | orchestrator | 2025-05-25 03:47:51 | INFO  | A [2] --- copy-ceph-keys 2025-05-25 03:47:51.389095 | orchestrator | 2025-05-25 03:47:51 | INFO  | A [3] ---- cephclient 2025-05-25 03:47:51.389136 | orchestrator | 2025-05-25 03:47:51 | INFO  | D [4] ----- ceph-bootstrap-dashboard 2025-05-25 03:47:51.389788 | orchestrator | 2025-05-25 03:47:51 | INFO  | A [4] ----- wait-for-keystone 2025-05-25 03:47:51.389816 | orchestrator | 2025-05-25 03:47:51 | INFO  | D [5] ------ kolla-ceph-rgw 2025-05-25 03:47:51.389838 | orchestrator | 2025-05-25 03:47:51 | INFO  | D [5] ------ glance 2025-05-25 03:47:51.389857 | orchestrator | 2025-05-25 03:47:51 | INFO  | D [5] ------ cinder 2025-05-25 03:47:51.389878 | orchestrator | 2025-05-25 03:47:51 | INFO  | D [5] ------ nova 2025-05-25 03:47:51.390131 | orchestrator | 2025-05-25 03:47:51 | INFO  | A [4] ----- prometheus 2025-05-25 03:47:51.390252 | orchestrator | 2025-05-25 03:47:51 | INFO  | D [5] ------ 
grafana 2025-05-25 03:47:51.612452 | orchestrator | 2025-05-25 03:47:51 | INFO  | All tasks of the collection nutshell are prepared for execution 2025-05-25 03:47:51.612548 | orchestrator | 2025-05-25 03:47:51 | INFO  | Tasks are running in the background 2025-05-25 03:47:54.542722 | orchestrator | 2025-05-25 03:47:54 | INFO  | No task IDs specified, wait for all currently running tasks 2025-05-25 03:47:56.661983 | orchestrator | 2025-05-25 03:47:56 | INFO  | Task f1699c70-0b05-422e-b6a2-8662964871e5 is in state STARTED 2025-05-25 03:47:56.662252 | orchestrator | 2025-05-25 03:47:56 | INFO  | Task ef4e3f9f-4eaa-43de-92f6-696f03a7fa18 is in state STARTED 2025-05-25 03:47:56.662361 | orchestrator | 2025-05-25 03:47:56 | INFO  | Task df7dfa4a-5144-4fe1-9308-9b881a115eb7 is in state STARTED 2025-05-25 03:47:56.666097 | orchestrator | 2025-05-25 03:47:56 | INFO  | Task c0214740-d9b7-4bee-98a0-0214c94bbfee is in state STARTED 2025-05-25 03:47:56.669521 | orchestrator | 2025-05-25 03:47:56 | INFO  | Task b1ea9b5e-dd9c-412e-a25f-8ec63ab94323 is in state STARTED 2025-05-25 03:47:56.671742 | orchestrator | 2025-05-25 03:47:56 | INFO  | Task 813ecdc7-db5b-478a-9c6c-d9971c9f3f67 is in state STARTED 2025-05-25 03:47:56.671801 | orchestrator | 2025-05-25 03:47:56 | INFO  | Task 4f0d98af-dbfb-4167-8dc6-74aeece0199b is in state STARTED 2025-05-25 03:47:56.671822 | orchestrator | 2025-05-25 03:47:56 | INFO  | Wait 1 second(s) until the next check 2025-05-25 03:47:59.714997 | orchestrator | 2025-05-25 03:47:59 | INFO  | Task f1699c70-0b05-422e-b6a2-8662964871e5 is in state STARTED 2025-05-25 03:47:59.715168 | orchestrator | 2025-05-25 03:47:59 | INFO  | Task ef4e3f9f-4eaa-43de-92f6-696f03a7fa18 is in state STARTED 2025-05-25 03:47:59.715425 | orchestrator | 2025-05-25 03:47:59 | INFO  | Task df7dfa4a-5144-4fe1-9308-9b881a115eb7 is in state STARTED 2025-05-25 03:47:59.715960 | orchestrator | 2025-05-25 03:47:59 | INFO  | Task c0214740-d9b7-4bee-98a0-0214c94bbfee is in state STARTED 
2025-05-25 03:47:59.720366 | orchestrator | 2025-05-25 03:47:59 | INFO  | Task b1ea9b5e-dd9c-412e-a25f-8ec63ab94323 is in state STARTED 2025-05-25 03:47:59.721638 | orchestrator | 2025-05-25 03:47:59 | INFO  | Task 813ecdc7-db5b-478a-9c6c-d9971c9f3f67 is in state STARTED 2025-05-25 03:47:59.721675 | orchestrator | 2025-05-25 03:47:59 | INFO  | Task 4f0d98af-dbfb-4167-8dc6-74aeece0199b is in state STARTED 2025-05-25 03:47:59.721689 | orchestrator | 2025-05-25 03:47:59 | INFO  | Wait 1 second(s) until the next check 2025-05-25 03:48:02.764619 | orchestrator | 2025-05-25 03:48:02 | INFO  | Task f1699c70-0b05-422e-b6a2-8662964871e5 is in state STARTED 2025-05-25 03:48:02.766703 | orchestrator | 2025-05-25 03:48:02 | INFO  | Task ef4e3f9f-4eaa-43de-92f6-696f03a7fa18 is in state STARTED 2025-05-25 03:48:02.766741 | orchestrator | 2025-05-25 03:48:02 | INFO  | Task df7dfa4a-5144-4fe1-9308-9b881a115eb7 is in state STARTED 2025-05-25 03:48:02.766754 | orchestrator | 2025-05-25 03:48:02 | INFO  | Task c0214740-d9b7-4bee-98a0-0214c94bbfee is in state STARTED 2025-05-25 03:48:02.766766 | orchestrator | 2025-05-25 03:48:02 | INFO  | Task b1ea9b5e-dd9c-412e-a25f-8ec63ab94323 is in state STARTED 2025-05-25 03:48:02.770322 | orchestrator | 2025-05-25 03:48:02 | INFO  | Task 813ecdc7-db5b-478a-9c6c-d9971c9f3f67 is in state STARTED 2025-05-25 03:48:02.774074 | orchestrator | 2025-05-25 03:48:02 | INFO  | Task 4f0d98af-dbfb-4167-8dc6-74aeece0199b is in state STARTED 2025-05-25 03:48:02.774167 | orchestrator | 2025-05-25 03:48:02 | INFO  | Wait 1 second(s) until the next check 2025-05-25 03:48:05.823155 | orchestrator | 2025-05-25 03:48:05 | INFO  | Task f1699c70-0b05-422e-b6a2-8662964871e5 is in state STARTED 2025-05-25 03:48:05.823274 | orchestrator | 2025-05-25 03:48:05 | INFO  | Task ef4e3f9f-4eaa-43de-92f6-696f03a7fa18 is in state STARTED 2025-05-25 03:48:05.823290 | orchestrator | 2025-05-25 03:48:05 | INFO  | Task df7dfa4a-5144-4fe1-9308-9b881a115eb7 is in state STARTED 
2025-05-25 03:48:05.823302 | orchestrator | 2025-05-25 03:48:05 | INFO  | Task c0214740-d9b7-4bee-98a0-0214c94bbfee is in state STARTED 2025-05-25 03:48:05.823313 | orchestrator | 2025-05-25 03:48:05 | INFO  | Task b1ea9b5e-dd9c-412e-a25f-8ec63ab94323 is in state STARTED 2025-05-25 03:48:05.823324 | orchestrator | 2025-05-25 03:48:05 | INFO  | Task 813ecdc7-db5b-478a-9c6c-d9971c9f3f67 is in state STARTED 2025-05-25 03:48:05.823335 | orchestrator | 2025-05-25 03:48:05 | INFO  | Task 4f0d98af-dbfb-4167-8dc6-74aeece0199b is in state STARTED 2025-05-25 03:48:05.823346 | orchestrator | 2025-05-25 03:48:05 | INFO  | Wait 1 second(s) until the next check 2025-05-25 03:48:08.871958 | orchestrator | 2025-05-25 03:48:08 | INFO  | Task f1699c70-0b05-422e-b6a2-8662964871e5 is in state STARTED 2025-05-25 03:48:08.873347 | orchestrator | 2025-05-25 03:48:08 | INFO  | Task ef4e3f9f-4eaa-43de-92f6-696f03a7fa18 is in state STARTED 2025-05-25 03:48:08.873772 | orchestrator | 2025-05-25 03:48:08 | INFO  | Task df7dfa4a-5144-4fe1-9308-9b881a115eb7 is in state STARTED 2025-05-25 03:48:08.874303 | orchestrator | 2025-05-25 03:48:08 | INFO  | Task c0214740-d9b7-4bee-98a0-0214c94bbfee is in state STARTED 2025-05-25 03:48:08.878626 | orchestrator | 2025-05-25 03:48:08 | INFO  | Task b1ea9b5e-dd9c-412e-a25f-8ec63ab94323 is in state STARTED 2025-05-25 03:48:08.878828 | orchestrator | 2025-05-25 03:48:08 | INFO  | Task 813ecdc7-db5b-478a-9c6c-d9971c9f3f67 is in state STARTED 2025-05-25 03:48:08.880276 | orchestrator | 2025-05-25 03:48:08 | INFO  | Task 4f0d98af-dbfb-4167-8dc6-74aeece0199b is in state STARTED 2025-05-25 03:48:08.880302 | orchestrator | 2025-05-25 03:48:08 | INFO  | Wait 1 second(s) until the next check 2025-05-25 03:48:11.963167 | orchestrator | 2025-05-25 03:48:11 | INFO  | Task f1699c70-0b05-422e-b6a2-8662964871e5 is in state STARTED 2025-05-25 03:48:11.963295 | orchestrator | 2025-05-25 03:48:11 | INFO  | Task ef4e3f9f-4eaa-43de-92f6-696f03a7fa18 is in state STARTED 
2025-05-25 03:48:11.963313 | orchestrator | 2025-05-25 03:48:11 | INFO  | Task df7dfa4a-5144-4fe1-9308-9b881a115eb7 is in state STARTED
2025-05-25 03:48:11.963416 | orchestrator | 2025-05-25 03:48:11 | INFO  | Task c0214740-d9b7-4bee-98a0-0214c94bbfee is in state STARTED
2025-05-25 03:48:11.967345 | orchestrator | 2025-05-25 03:48:11 | INFO  | Task b1ea9b5e-dd9c-412e-a25f-8ec63ab94323 is in state STARTED
2025-05-25 03:48:11.967461 | orchestrator | 2025-05-25 03:48:11 | INFO  | Task 813ecdc7-db5b-478a-9c6c-d9971c9f3f67 is in state STARTED
2025-05-25 03:48:11.970301 | orchestrator | 2025-05-25 03:48:11 | INFO  | Task 4f0d98af-dbfb-4167-8dc6-74aeece0199b is in state STARTED
2025-05-25 03:48:11.970345 | orchestrator | 2025-05-25 03:48:11 | INFO  | Wait 1 second(s) until the next check
2025-05-25 03:48:15.037248 | orchestrator | 2025-05-25 03:48:15 | INFO  | Task f1699c70-0b05-422e-b6a2-8662964871e5 is in state STARTED
2025-05-25 03:48:15.038488 | orchestrator | 2025-05-25 03:48:15 | INFO  | Task ef4e3f9f-4eaa-43de-92f6-696f03a7fa18 is in state STARTED
2025-05-25 03:48:15.043515 | orchestrator | 2025-05-25 03:48:15 | INFO  | Task df7dfa4a-5144-4fe1-9308-9b881a115eb7 is in state STARTED
2025-05-25 03:48:15.049884 | orchestrator | 2025-05-25 03:48:15 | INFO  | Task c0214740-d9b7-4bee-98a0-0214c94bbfee is in state STARTED
2025-05-25 03:48:15.051546 | orchestrator | 2025-05-25 03:48:15 | INFO  | Task b1ea9b5e-dd9c-412e-a25f-8ec63ab94323 is in state STARTED
2025-05-25 03:48:15.058502 | orchestrator | 2025-05-25 03:48:15 | INFO  | Task 813ecdc7-db5b-478a-9c6c-d9971c9f3f67 is in state STARTED
2025-05-25 03:48:15.062266 | orchestrator | 2025-05-25 03:48:15 | INFO  | Task 4f0d98af-dbfb-4167-8dc6-74aeece0199b is in state STARTED
2025-05-25 03:48:15.062300 | orchestrator | 2025-05-25 03:48:15 | INFO  | Wait 1 second(s) until the next check
2025-05-25 03:48:18.122109 | orchestrator | 2025-05-25 03:48:18 | INFO  | Task f1699c70-0b05-422e-b6a2-8662964871e5 is in state STARTED
2025-05-25 03:48:18.122960 | orchestrator | 2025-05-25 03:48:18 | INFO  | Task ef4e3f9f-4eaa-43de-92f6-696f03a7fa18 is in state STARTED
2025-05-25 03:48:18.124458 | orchestrator | 2025-05-25 03:48:18 | INFO  | Task df7dfa4a-5144-4fe1-9308-9b881a115eb7 is in state STARTED
2025-05-25 03:48:18.124509 | orchestrator | 2025-05-25 03:48:18 | INFO  | Task c0214740-d9b7-4bee-98a0-0214c94bbfee is in state STARTED
2025-05-25 03:48:18.126725 | orchestrator | 2025-05-25 03:48:18 | INFO  | Task b1ea9b5e-dd9c-412e-a25f-8ec63ab94323 is in state STARTED
2025-05-25 03:48:18.126753 | orchestrator | 2025-05-25 03:48:18 | INFO  | Task 813ecdc7-db5b-478a-9c6c-d9971c9f3f67 is in state STARTED
2025-05-25 03:48:18.126766 | orchestrator | 2025-05-25 03:48:18 | INFO  | Task 4f0d98af-dbfb-4167-8dc6-74aeece0199b is in state STARTED
2025-05-25 03:48:18.126778 | orchestrator | 2025-05-25 03:48:18 | INFO  | Wait 1 second(s) until the next check
2025-05-25 03:48:21.196797 | orchestrator | 2025-05-25 03:48:21 | INFO  | Task f1699c70-0b05-422e-b6a2-8662964871e5 is in state STARTED
2025-05-25 03:48:21.203391 | orchestrator |
2025-05-25 03:48:21.203471 | orchestrator | PLAY [Apply role geerlingguy.dotfiles] *****************************************
2025-05-25 03:48:21.203487 | orchestrator |
2025-05-25 03:48:21.203498 | orchestrator | TASK [geerlingguy.dotfiles : Ensure dotfiles repository is cloned locally.] ****
2025-05-25 03:48:21.203510 | orchestrator | Sunday 25 May 2025 03:48:03 +0000 (0:00:00.961) 0:00:00.961 ************
2025-05-25 03:48:21.203521 | orchestrator | changed: [testbed-manager]
2025-05-25 03:48:21.203533 | orchestrator | changed: [testbed-node-0]
2025-05-25 03:48:21.203544 | orchestrator | changed: [testbed-node-1]
2025-05-25 03:48:21.203555 | orchestrator | changed: [testbed-node-2]
2025-05-25 03:48:21.203566 | orchestrator | changed: [testbed-node-3]
2025-05-25 03:48:21.203577 | orchestrator | changed: [testbed-node-4]
2025-05-25 03:48:21.203588 | orchestrator | changed: [testbed-node-5]
2025-05-25 03:48:21.203599 | orchestrator |
2025-05-25 03:48:21.203611 | orchestrator | TASK [geerlingguy.dotfiles : Ensure all configured dotfiles are links.] ********
2025-05-25 03:48:21.203622 | orchestrator | Sunday 25 May 2025 03:48:08 +0000 (0:00:04.184) 0:00:05.146 ************
2025-05-25 03:48:21.203633 | orchestrator | ok: [testbed-manager] => (item=.tmux.conf)
2025-05-25 03:48:21.203645 | orchestrator | ok: [testbed-node-0] => (item=.tmux.conf)
2025-05-25 03:48:21.203655 | orchestrator | ok: [testbed-node-1] => (item=.tmux.conf)
2025-05-25 03:48:21.203667 | orchestrator | ok: [testbed-node-2] => (item=.tmux.conf)
2025-05-25 03:48:21.203678 | orchestrator | ok: [testbed-node-3] => (item=.tmux.conf)
2025-05-25 03:48:21.203689 | orchestrator | ok: [testbed-node-4] => (item=.tmux.conf)
2025-05-25 03:48:21.203699 | orchestrator | ok: [testbed-node-5] => (item=.tmux.conf)
2025-05-25 03:48:21.203710 | orchestrator |
2025-05-25 03:48:21.203722 | orchestrator | TASK [geerlingguy.dotfiles : Remove existing dotfiles file if a replacement is being linked.] ***
2025-05-25 03:48:21.203733 | orchestrator | Sunday 25 May 2025 03:48:09 +0000 (0:00:01.762) 0:00:06.908 ************
2025-05-25 03:48:21.203789 | orchestrator | ok: [testbed-manager] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2025-05-25 03:48:08.741992', 'end': '2025-05-25 03:48:08.745621', 'delta': '0:00:00.003629', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}])
2025-05-25 03:48:21.203819 | orchestrator | ok: [testbed-node-0] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2025-05-25 03:48:08.750992', 'end': '2025-05-25 03:48:08.761171', 'delta': '0:00:00.010179', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}])
2025-05-25 03:48:21.203839 | orchestrator | ok: [testbed-node-1] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2025-05-25 03:48:08.759442', 'end': '2025-05-25 03:48:08.766785', 'delta': '0:00:00.007343', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}])
2025-05-25 03:48:21.203901 | orchestrator | ok: [testbed-node-2] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2025-05-25 03:48:08.981278', 'end': '2025-05-25 03:48:08.989586', 'delta': '0:00:00.008308', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}])
2025-05-25 03:48:21.203922 | orchestrator | ok: [testbed-node-3] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2025-05-25 03:48:09.103822', 'end': '2025-05-25 03:48:09.112432', 'delta': '0:00:00.008610', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}])
2025-05-25 03:48:21.203942 | orchestrator | ok: [testbed-node-4] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2025-05-25 03:48:09.349501', 'end': '2025-05-25 03:48:09.357002', 'delta': '0:00:00.007501', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}])
2025-05-25 03:48:21.203959 | orchestrator | ok: [testbed-node-5] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2025-05-25 03:48:09.515976', 'end': '2025-05-25 03:48:09.524970', 'delta': '0:00:00.008994', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}])
2025-05-25 03:48:21.203971 | orchestrator |
2025-05-25 03:48:21.203984 | orchestrator | TASK [geerlingguy.dotfiles : Ensure parent folders of link dotfiles exist.] ****
2025-05-25 03:48:21.203997 | orchestrator | Sunday 25 May 2025 03:48:12 +0000 (0:00:02.635) 0:00:09.544 ************
2025-05-25 03:48:21.204009 | orchestrator | ok: [testbed-manager] => (item=.tmux.conf)
2025-05-25 03:48:21.204035 | orchestrator | ok: [testbed-node-0] => (item=.tmux.conf)
2025-05-25 03:48:21.204047 | orchestrator | ok: [testbed-node-1] => (item=.tmux.conf)
2025-05-25 03:48:21.204060 | orchestrator | ok: [testbed-node-2] => (item=.tmux.conf)
2025-05-25 03:48:21.204073 | orchestrator | ok: [testbed-node-3] => (item=.tmux.conf)
2025-05-25 03:48:21.204084 | orchestrator | ok: [testbed-node-4] => (item=.tmux.conf)
2025-05-25 03:48:21.204097 | orchestrator | ok: [testbed-node-5] => (item=.tmux.conf)
2025-05-25 03:48:21.204109 | orchestrator |
2025-05-25 03:48:21.204146 | orchestrator | TASK [geerlingguy.dotfiles : Link dotfiles into home folder.] ******************
2025-05-25 03:48:21.204159 | orchestrator | Sunday 25 May 2025 03:48:14 +0000 (0:00:02.182) 0:00:11.726 ************
2025-05-25 03:48:21.204171 | orchestrator | changed: [testbed-manager] => (item=.tmux.conf)
2025-05-25 03:48:21.204183 | orchestrator | changed: [testbed-node-0] => (item=.tmux.conf)
2025-05-25 03:48:21.204195 | orchestrator | changed: [testbed-node-1] => (item=.tmux.conf)
2025-05-25 03:48:21.204207 | orchestrator | changed: [testbed-node-2] => (item=.tmux.conf)
2025-05-25 03:48:21.204219 | orchestrator | changed: [testbed-node-3] => (item=.tmux.conf)
2025-05-25 03:48:21.204231 | orchestrator | changed: [testbed-node-4] => (item=.tmux.conf)
2025-05-25 03:48:21.204243 | orchestrator | changed: [testbed-node-5] => (item=.tmux.conf)
2025-05-25 03:48:21.204255 | orchestrator |
2025-05-25 03:48:21.204268 | orchestrator | PLAY RECAP *********************************************************************
2025-05-25 03:48:21.204289 | orchestrator | testbed-manager : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-05-25 03:48:21.204304 | orchestrator | testbed-node-0 : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-05-25 03:48:21.204318 | orchestrator | testbed-node-1 : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-05-25 03:48:21.204330 | orchestrator | testbed-node-2 : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-05-25 03:48:21.204342 | orchestrator | testbed-node-3 : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-05-25 03:48:21.204352 | orchestrator | testbed-node-4 : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-05-25 03:48:21.204363 | orchestrator | testbed-node-5 : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-05-25 03:48:21.204374 | orchestrator |
2025-05-25 03:48:21.204385 | orchestrator |
2025-05-25 03:48:21.204396 | orchestrator | TASKS RECAP ********************************************************************
2025-05-25 03:48:21.204407 | orchestrator | Sunday 25 May 2025 03:48:18 +0000 (0:00:03.837) 0:00:15.564 ************
2025-05-25 03:48:21.204417 | orchestrator | ===============================================================================
2025-05-25 03:48:21.204428 | orchestrator | geerlingguy.dotfiles : Ensure dotfiles repository is cloned locally. ---- 4.18s
2025-05-25 03:48:21.204439 | orchestrator | geerlingguy.dotfiles : Link dotfiles into home folder. ------------------ 3.84s
2025-05-25 03:48:21.204450 | orchestrator | geerlingguy.dotfiles : Remove existing dotfiles file if a replacement is being linked. --- 2.64s
2025-05-25 03:48:21.204461 | orchestrator | geerlingguy.dotfiles : Ensure parent folders of link dotfiles exist. ---- 2.18s
2025-05-25 03:48:21.204472 | orchestrator | geerlingguy.dotfiles : Ensure all configured dotfiles are links. -------- 1.76s
2025-05-25 03:48:21.204511 | orchestrator | 2025-05-25 03:48:21 | INFO  | Task ef4e3f9f-4eaa-43de-92f6-696f03a7fa18 is in state SUCCESS
2025-05-25 03:48:21.204589 | orchestrator | 2025-05-25 03:48:21 | INFO  | Task df7dfa4a-5144-4fe1-9308-9b881a115eb7 is in state STARTED
2025-05-25 03:48:21.217759 | orchestrator | 2025-05-25 03:48:21 | INFO  | Task c0214740-d9b7-4bee-98a0-0214c94bbfee is in state STARTED
2025-05-25 03:48:21.217921 | orchestrator | 2025-05-25 03:48:21 | INFO  | Task b1ea9b5e-dd9c-412e-a25f-8ec63ab94323 is in state STARTED
2025-05-25 03:48:21.217952 | orchestrator | 2025-05-25 03:48:21 | INFO  | Task b12395a3-e59c-4b67-8638-0a67c0e3b085 is in state STARTED
2025-05-25 03:48:21.217973 | orchestrator | 2025-05-25 03:48:21 | INFO  | Task 813ecdc7-db5b-478a-9c6c-d9971c9f3f67 is in state STARTED
2025-05-25 03:48:21.217993 | orchestrator | 2025-05-25 03:48:21 | INFO  | Task 4f0d98af-dbfb-4167-8dc6-74aeece0199b is in state STARTED
2025-05-25 03:48:21.218073 | orchestrator | 2025-05-25 03:48:21 | INFO  | Wait 1 second(s) until the next check
2025-05-25 03:48:24.258784 | orchestrator | 2025-05-25 03:48:24 | INFO  | Task f1699c70-0b05-422e-b6a2-8662964871e5 is in state STARTED
2025-05-25 03:48:24.258924 | orchestrator | 2025-05-25 03:48:24 | INFO  | Task df7dfa4a-5144-4fe1-9308-9b881a115eb7 is in state STARTED
2025-05-25 03:48:24.258941 | orchestrator | 2025-05-25 03:48:24 | INFO  | Task c0214740-d9b7-4bee-98a0-0214c94bbfee is in state STARTED
2025-05-25 03:48:24.258953 | orchestrator | 2025-05-25 03:48:24 | INFO  | Task b1ea9b5e-dd9c-412e-a25f-8ec63ab94323 is in state STARTED
2025-05-25 03:48:24.259042 | orchestrator | 2025-05-25 03:48:24 | INFO  | Task b12395a3-e59c-4b67-8638-0a67c0e3b085 is in state STARTED
2025-05-25 03:48:24.259058 | orchestrator | 2025-05-25 03:48:24 | INFO  | Task 813ecdc7-db5b-478a-9c6c-d9971c9f3f67 is in state STARTED
2025-05-25 03:48:24.259069 | orchestrator | 2025-05-25 03:48:24 | INFO  | Task 4f0d98af-dbfb-4167-8dc6-74aeece0199b is in state STARTED
2025-05-25 03:48:24.259080 | orchestrator | 2025-05-25 03:48:24 | INFO  | Wait 1 second(s) until the next check
2025-05-25 03:48:27.325872 | orchestrator | 2025-05-25 03:48:27 | INFO  | Task f1699c70-0b05-422e-b6a2-8662964871e5 is in state STARTED
2025-05-25 03:48:27.325972 | orchestrator | 2025-05-25 03:48:27 | INFO  | Task df7dfa4a-5144-4fe1-9308-9b881a115eb7 is in state STARTED
2025-05-25 03:48:27.325989 | orchestrator | 2025-05-25 03:48:27 | INFO  | Task c0214740-d9b7-4bee-98a0-0214c94bbfee is in state STARTED
2025-05-25 03:48:27.326602 | orchestrator | 2025-05-25 03:48:27 | INFO  | Task b1ea9b5e-dd9c-412e-a25f-8ec63ab94323 is in state STARTED
2025-05-25 03:48:27.327214 | orchestrator | 2025-05-25 03:48:27 | INFO  | Task b12395a3-e59c-4b67-8638-0a67c0e3b085 is in state STARTED
2025-05-25 03:48:27.327862 | orchestrator | 2025-05-25 03:48:27 | INFO  | Task 813ecdc7-db5b-478a-9c6c-d9971c9f3f67 is in state STARTED
2025-05-25 03:48:27.328644 | orchestrator | 2025-05-25 03:48:27 | INFO  | Task 4f0d98af-dbfb-4167-8dc6-74aeece0199b is in state STARTED
2025-05-25 03:48:27.328673 | orchestrator | 2025-05-25 03:48:27 | INFO  | Wait 1 second(s) until the next check
2025-05-25 03:48:30.394757 | orchestrator | 2025-05-25 03:48:30 | INFO  | Task f1699c70-0b05-422e-b6a2-8662964871e5 is in state STARTED
2025-05-25 03:48:30.397800 | orchestrator | 2025-05-25 03:48:30 | INFO  | Task df7dfa4a-5144-4fe1-9308-9b881a115eb7 is in state STARTED
2025-05-25 03:48:30.407293 | orchestrator | 2025-05-25 03:48:30 | INFO  | Task c0214740-d9b7-4bee-98a0-0214c94bbfee is in state STARTED
2025-05-25 03:48:30.411919 | orchestrator | 2025-05-25 03:48:30 | INFO  | Task b1ea9b5e-dd9c-412e-a25f-8ec63ab94323 is in state STARTED
2025-05-25 03:48:30.416510 | orchestrator | 2025-05-25 03:48:30 | INFO  | Task b12395a3-e59c-4b67-8638-0a67c0e3b085 is in state STARTED
2025-05-25 03:48:30.418802 | orchestrator | 2025-05-25 03:48:30 | INFO  | Task 813ecdc7-db5b-478a-9c6c-d9971c9f3f67 is in state STARTED
2025-05-25 03:48:30.420860 | orchestrator | 2025-05-25 03:48:30 | INFO  | Task 4f0d98af-dbfb-4167-8dc6-74aeece0199b is in state STARTED
2025-05-25 03:48:30.420964 | orchestrator | 2025-05-25 03:48:30 | INFO  | Wait 1 second(s) until the next check
2025-05-25 03:48:33.474814 | orchestrator | 2025-05-25 03:48:33 | INFO  | Task f1699c70-0b05-422e-b6a2-8662964871e5 is in state STARTED
2025-05-25 03:48:33.475158 | orchestrator | 2025-05-25 03:48:33 | INFO  | Task df7dfa4a-5144-4fe1-9308-9b881a115eb7 is in state STARTED
2025-05-25 03:48:33.478095 | orchestrator | 2025-05-25 03:48:33 | INFO  | Task c0214740-d9b7-4bee-98a0-0214c94bbfee is in state STARTED
2025-05-25 03:48:33.480691 | orchestrator | 2025-05-25 03:48:33 | INFO  | Task b1ea9b5e-dd9c-412e-a25f-8ec63ab94323 is in state STARTED
2025-05-25 03:48:33.482000 | orchestrator | 2025-05-25 03:48:33 | INFO  | Task b12395a3-e59c-4b67-8638-0a67c0e3b085 is in state STARTED
2025-05-25 03:48:33.484880 | orchestrator | 2025-05-25 03:48:33 | INFO  | Task 813ecdc7-db5b-478a-9c6c-d9971c9f3f67 is in state STARTED
2025-05-25 03:48:33.487625 | orchestrator | 2025-05-25 03:48:33 | INFO  | Task 4f0d98af-dbfb-4167-8dc6-74aeece0199b is in state STARTED
2025-05-25 03:48:33.487663 | orchestrator | 2025-05-25 03:48:33 | INFO  | Wait 1 second(s) until the next check
2025-05-25 03:48:36.542792 | orchestrator | 2025-05-25 03:48:36 | INFO  | Task f1699c70-0b05-422e-b6a2-8662964871e5 is in state STARTED
2025-05-25 03:48:36.545279 | orchestrator | 2025-05-25 03:48:36 | INFO  | Task df7dfa4a-5144-4fe1-9308-9b881a115eb7 is in state STARTED
2025-05-25 03:48:36.546648 | orchestrator | 2025-05-25 03:48:36 | INFO  | Task c0214740-d9b7-4bee-98a0-0214c94bbfee is in state STARTED
2025-05-25 03:48:36.549047 | orchestrator | 2025-05-25 03:48:36 | INFO  | Task b1ea9b5e-dd9c-412e-a25f-8ec63ab94323 is in state STARTED
2025-05-25 03:48:36.551648 | orchestrator | 2025-05-25 03:48:36 | INFO  | Task b12395a3-e59c-4b67-8638-0a67c0e3b085 is in state STARTED
2025-05-25 03:48:36.552695 | orchestrator | 2025-05-25 03:48:36 | INFO  | Task 813ecdc7-db5b-478a-9c6c-d9971c9f3f67 is in state STARTED
2025-05-25 03:48:36.561694 | orchestrator | 2025-05-25 03:48:36 | INFO  | Task 4f0d98af-dbfb-4167-8dc6-74aeece0199b is in state STARTED
2025-05-25 03:48:36.561751 | orchestrator | 2025-05-25 03:48:36 | INFO  | Wait 1 second(s) until the next check
2025-05-25 03:48:39.613388 | orchestrator | 2025-05-25 03:48:39 | INFO  | Task f1699c70-0b05-422e-b6a2-8662964871e5 is in state STARTED
2025-05-25 03:48:39.613457 | orchestrator | 2025-05-25 03:48:39 | INFO  | Task df7dfa4a-5144-4fe1-9308-9b881a115eb7 is in state STARTED
2025-05-25 03:48:39.615327 | orchestrator | 2025-05-25 03:48:39 | INFO  | Task c0214740-d9b7-4bee-98a0-0214c94bbfee is in state STARTED
2025-05-25 03:48:39.618719 | orchestrator | 2025-05-25 03:48:39 | INFO  | Task b1ea9b5e-dd9c-412e-a25f-8ec63ab94323 is in state STARTED
2025-05-25 03:48:39.623448 | orchestrator | 2025-05-25 03:48:39 | INFO  | Task b12395a3-e59c-4b67-8638-0a67c0e3b085 is in state STARTED
2025-05-25 03:48:39.625499 | orchestrator | 2025-05-25 03:48:39 | INFO  | Task 813ecdc7-db5b-478a-9c6c-d9971c9f3f67 is in state SUCCESS
2025-05-25 03:48:39.630070 | orchestrator | 2025-05-25 03:48:39 | INFO  | Task 4f0d98af-dbfb-4167-8dc6-74aeece0199b is in state STARTED
2025-05-25 03:48:39.630370 | orchestrator | 2025-05-25 03:48:39 | INFO  | Wait 1 second(s) until the next check
2025-05-25 03:48:42.667625 | orchestrator | 2025-05-25 03:48:42 | INFO  | Task f1699c70-0b05-422e-b6a2-8662964871e5 is in state STARTED
2025-05-25 03:48:42.667724 | orchestrator | 2025-05-25 03:48:42 | INFO  | Task df7dfa4a-5144-4fe1-9308-9b881a115eb7 is in state STARTED
2025-05-25 03:48:42.669263 | orchestrator | 2025-05-25 03:48:42 | INFO  | Task c0214740-d9b7-4bee-98a0-0214c94bbfee is in state STARTED
2025-05-25 03:48:42.670822 | orchestrator | 2025-05-25 03:48:42 | INFO  | Task b1ea9b5e-dd9c-412e-a25f-8ec63ab94323 is in state STARTED
2025-05-25 03:48:42.671888 | orchestrator | 2025-05-25 03:48:42 | INFO  | Task b12395a3-e59c-4b67-8638-0a67c0e3b085 is in state STARTED
2025-05-25 03:48:42.672157 | orchestrator | 2025-05-25 03:48:42 | INFO  | Task 4f0d98af-dbfb-4167-8dc6-74aeece0199b is in state STARTED
2025-05-25 03:48:42.672344 | orchestrator | 2025-05-25 03:48:42 | INFO  | Wait 1 second(s) until the next check
2025-05-25 03:48:45.739108 | orchestrator | 2025-05-25 03:48:45 | INFO  | Task f1699c70-0b05-422e-b6a2-8662964871e5 is in state STARTED
2025-05-25 03:48:45.739243 | orchestrator | 2025-05-25 03:48:45 | INFO  | Task df7dfa4a-5144-4fe1-9308-9b881a115eb7 is in state STARTED
2025-05-25 03:48:45.739259 | orchestrator | 2025-05-25 03:48:45 | INFO  | Task c0214740-d9b7-4bee-98a0-0214c94bbfee is in state STARTED
2025-05-25 03:48:45.742571 | orchestrator | 2025-05-25 03:48:45 | INFO  | Task b1ea9b5e-dd9c-412e-a25f-8ec63ab94323 is in state STARTED
2025-05-25 03:48:45.742624 | orchestrator | 2025-05-25 03:48:45 | INFO  | Task b12395a3-e59c-4b67-8638-0a67c0e3b085 is in state STARTED
2025-05-25 03:48:45.743340 | orchestrator | 2025-05-25 03:48:45 | INFO  | Task 4f0d98af-dbfb-4167-8dc6-74aeece0199b is in state STARTED
2025-05-25 03:48:45.744442 | orchestrator | 2025-05-25 03:48:45 | INFO  | Wait 1 second(s) until the next check
2025-05-25 03:48:48.807685 | orchestrator | 2025-05-25 03:48:48 | INFO  | Task f1699c70-0b05-422e-b6a2-8662964871e5 is in state STARTED
2025-05-25 03:48:48.809226 | orchestrator | 2025-05-25 03:48:48 | INFO  | Task df7dfa4a-5144-4fe1-9308-9b881a115eb7 is in state STARTED
2025-05-25 03:48:48.809805 | orchestrator | 2025-05-25 03:48:48 | INFO  | Task c0214740-d9b7-4bee-98a0-0214c94bbfee is in state STARTED
2025-05-25 03:48:48.813524 | orchestrator | 2025-05-25 03:48:48 | INFO  | Task b1ea9b5e-dd9c-412e-a25f-8ec63ab94323 is in state STARTED
2025-05-25 03:48:48.813582 | orchestrator | 2025-05-25 03:48:48 | INFO  | Task b12395a3-e59c-4b67-8638-0a67c0e3b085 is in state STARTED
2025-05-25 03:48:48.813597 | orchestrator | 2025-05-25 03:48:48 | INFO  | Task 4f0d98af-dbfb-4167-8dc6-74aeece0199b is in state STARTED
2025-05-25 03:48:48.813662 | orchestrator | 2025-05-25 03:48:48 | INFO  | Wait 1 second(s) until the next check
2025-05-25 03:48:51.886633 | orchestrator | 2025-05-25 03:48:51 | INFO  | Task f1699c70-0b05-422e-b6a2-8662964871e5 is in state STARTED
2025-05-25 03:48:51.895717 | orchestrator | 2025-05-25 03:48:51 | INFO  | Task df7dfa4a-5144-4fe1-9308-9b881a115eb7 is in state STARTED
2025-05-25 03:48:51.895810 | orchestrator | 2025-05-25 03:48:51 | INFO  | Task c0214740-d9b7-4bee-98a0-0214c94bbfee is in state STARTED
2025-05-25 03:48:51.895826 | orchestrator | 2025-05-25 03:48:51 | INFO  | Task b1ea9b5e-dd9c-412e-a25f-8ec63ab94323 is in state STARTED
2025-05-25 03:48:51.901602 | orchestrator | 2025-05-25 03:48:51 | INFO  | Task b12395a3-e59c-4b67-8638-0a67c0e3b085 is in state STARTED
2025-05-25 03:48:51.904890 | orchestrator | 2025-05-25 03:48:51 | INFO  | Task 4f0d98af-dbfb-4167-8dc6-74aeece0199b is in state STARTED
2025-05-25 03:48:51.904922 | orchestrator | 2025-05-25 03:48:51 | INFO  | Wait 1 second(s) until the next check
2025-05-25 03:48:54.957185 | orchestrator | 2025-05-25 03:48:54 | INFO  | Task f1699c70-0b05-422e-b6a2-8662964871e5 is in state STARTED
2025-05-25 03:48:54.958999 | orchestrator | 2025-05-25 03:48:54 | INFO  | Task df7dfa4a-5144-4fe1-9308-9b881a115eb7 is in state SUCCESS
2025-05-25 03:48:54.963341 | orchestrator | 2025-05-25 03:48:54 | INFO  | Task c0214740-d9b7-4bee-98a0-0214c94bbfee is in state STARTED
2025-05-25 03:48:54.967089 | orchestrator | 2025-05-25 03:48:54 | INFO  | Task b1ea9b5e-dd9c-412e-a25f-8ec63ab94323 is in state STARTED
2025-05-25 03:48:54.971368 | orchestrator | 2025-05-25 03:48:54 | INFO  | Task b12395a3-e59c-4b67-8638-0a67c0e3b085 is in state STARTED
2025-05-25 03:48:54.979423 | orchestrator | 2025-05-25 03:48:54 | INFO  | Task 4f0d98af-dbfb-4167-8dc6-74aeece0199b is in state STARTED
2025-05-25 03:48:54.979479 | orchestrator | 2025-05-25 03:48:54 | INFO  | Wait 1 second(s) until the next check
2025-05-25 03:48:58.047192 | orchestrator | 2025-05-25 03:48:58 | INFO  | Task f1699c70-0b05-422e-b6a2-8662964871e5 is in state STARTED
2025-05-25 03:48:58.048802 | orchestrator | 2025-05-25 03:48:58 | INFO  | Task c0214740-d9b7-4bee-98a0-0214c94bbfee is in state STARTED
2025-05-25 03:48:58.051257 | orchestrator | 2025-05-25 03:48:58 | INFO  | Task b1ea9b5e-dd9c-412e-a25f-8ec63ab94323 is in state STARTED
2025-05-25 03:48:58.054156 | orchestrator | 2025-05-25 03:48:58 | INFO  | Task b12395a3-e59c-4b67-8638-0a67c0e3b085 is in state STARTED
2025-05-25 03:48:58.056198 | orchestrator | 2025-05-25 03:48:58 | INFO  | Task 4f0d98af-dbfb-4167-8dc6-74aeece0199b is in state STARTED
2025-05-25 03:48:58.056319 | orchestrator | 2025-05-25 03:48:58 | INFO  | Wait 1 second(s) until the next check
2025-05-25 03:49:01.120641 | orchestrator | 2025-05-25 03:49:01 | INFO  | Task f1699c70-0b05-422e-b6a2-8662964871e5 is in state STARTED
2025-05-25 03:49:01.120713 | orchestrator | 2025-05-25 03:49:01 | INFO  | Task c0214740-d9b7-4bee-98a0-0214c94bbfee is in state STARTED
2025-05-25 03:49:01.121219 | orchestrator | 2025-05-25 03:49:01 | INFO  | Task b1ea9b5e-dd9c-412e-a25f-8ec63ab94323 is in state STARTED
2025-05-25 03:49:01.126294 | orchestrator | 2025-05-25 03:49:01 | INFO  | Task b12395a3-e59c-4b67-8638-0a67c0e3b085 is in state STARTED
2025-05-25 03:49:01.127819 | orchestrator | 2025-05-25 03:49:01 | INFO  | Task 4f0d98af-dbfb-4167-8dc6-74aeece0199b is in state STARTED
2025-05-25 03:49:01.133580 | orchestrator | 2025-05-25 03:49:01 | INFO  | Wait 1 second(s) until the next check
2025-05-25 03:49:04.195490 | orchestrator | 2025-05-25 03:49:04 | INFO  | Task f1699c70-0b05-422e-b6a2-8662964871e5 is in state STARTED
2025-05-25 03:49:04.195979 | orchestrator | 2025-05-25 03:49:04 | INFO  | Task c0214740-d9b7-4bee-98a0-0214c94bbfee is in state STARTED
2025-05-25 03:49:04.197994 | orchestrator | 2025-05-25 03:49:04 | INFO  | Task b1ea9b5e-dd9c-412e-a25f-8ec63ab94323 is in state STARTED
2025-05-25 03:49:04.200892 | orchestrator | 2025-05-25 03:49:04 | INFO  | Task b12395a3-e59c-4b67-8638-0a67c0e3b085 is in state STARTED
2025-05-25 03:49:04.201566 | orchestrator | 2025-05-25 03:49:04 | INFO  | Task 4f0d98af-dbfb-4167-8dc6-74aeece0199b is in state STARTED
2025-05-25 03:49:04.202516 | orchestrator | 2025-05-25 03:49:04 | INFO  | Wait 1 second(s) until the next check
2025-05-25 03:49:07.252452 | orchestrator | 2025-05-25 03:49:07 | INFO  | Task f1699c70-0b05-422e-b6a2-8662964871e5 is in state STARTED
2025-05-25 03:49:07.252751 | orchestrator | 2025-05-25 03:49:07 | INFO  | Task c0214740-d9b7-4bee-98a0-0214c94bbfee is in state STARTED
2025-05-25 03:49:07.255452 | orchestrator |
2025-05-25 03:49:07.255541 | orchestrator |
2025-05-25 03:49:07.255564 | orchestrator | PLAY [Apply role homer] ********************************************************
2025-05-25 03:49:07.255578 | orchestrator |
2025-05-25 03:49:07.255589 | orchestrator | TASK [osism.services.homer : Inform about new parameter homer_url_opensearch_dashboards] ***
2025-05-25 03:49:07.255602 | orchestrator | Sunday 25 May 2025 03:48:03 +0000 (0:00:00.545) 0:00:00.545 ************
2025-05-25 03:49:07.255613 | orchestrator | ok: [testbed-manager] => {
2025-05-25 03:49:07.255626 | orchestrator |  "msg": "The support for the homer_url_kibana has been removed. Please use the homer_url_opensearch_dashboards parameter."
2025-05-25 03:49:07.255660 | orchestrator | }
2025-05-25 03:49:07.255671 | orchestrator |
2025-05-25 03:49:07.255683 | orchestrator | TASK [osism.services.homer : Create traefik external network] ******************
2025-05-25 03:49:07.255693 | orchestrator | Sunday 25 May 2025 03:48:03 +0000 (0:00:00.376) 0:00:00.922 ************
2025-05-25 03:49:07.255704 | orchestrator | ok: [testbed-manager]
2025-05-25 03:49:07.255716 | orchestrator |
2025-05-25 03:49:07.255727 | orchestrator | TASK [osism.services.homer : Create required directories] **********************
2025-05-25 03:49:07.255738 | orchestrator | Sunday 25 May 2025 03:48:05 +0000 (0:00:02.100) 0:00:03.022 ************
2025-05-25 03:49:07.255749 | orchestrator | changed: [testbed-manager] => (item=/opt/homer/configuration)
2025-05-25 03:49:07.255759 | orchestrator | ok: [testbed-manager] => (item=/opt/homer)
2025-05-25 03:49:07.255771 | orchestrator |
2025-05-25 03:49:07.255782 | orchestrator | TASK [osism.services.homer : Copy config.yml configuration file] ***************
2025-05-25 03:49:07.255793 | orchestrator | Sunday 25 May 2025 03:48:06 +0000 (0:00:01.297) 0:00:04.320 ************
2025-05-25 03:49:07.255803 | orchestrator | changed: [testbed-manager]
2025-05-25 03:49:07.255814 | orchestrator |
2025-05-25 03:49:07.255825 | orchestrator | TASK [osism.services.homer : Copy docker-compose.yml file] *********************
2025-05-25 03:49:07.255835 | orchestrator | Sunday 25 May 2025 03:48:08 +0000 (0:00:01.990) 0:00:06.310 ************
2025-05-25 03:49:07.255846 | orchestrator | changed: [testbed-manager]
2025-05-25 03:49:07.255857 | orchestrator |
2025-05-25 03:49:07.255867 | orchestrator | TASK [osism.services.homer : Manage homer service] *****************************
2025-05-25 03:49:07.255878 | orchestrator | Sunday 25 May 2025 03:48:10 +0000 (0:00:01.889) 0:00:08.200 ************
2025-05-25 03:49:07.255890 | orchestrator | FAILED - RETRYING: [testbed-manager]: Manage homer service (10 retries left).
2025-05-25 03:49:07.255901 | orchestrator | ok: [testbed-manager]
2025-05-25 03:49:07.255912 | orchestrator |
2025-05-25 03:49:07.255922 | orchestrator | RUNNING HANDLER [osism.services.homer : Restart homer service] *****************
2025-05-25 03:49:07.255933 | orchestrator | Sunday 25 May 2025 03:48:35 +0000 (0:00:24.175) 0:00:32.375 ************
2025-05-25 03:49:07.255944 | orchestrator | changed: [testbed-manager]
2025-05-25 03:49:07.255955 | orchestrator |
2025-05-25 03:49:07.255965 | orchestrator | PLAY RECAP *********************************************************************
2025-05-25 03:49:07.255977 | orchestrator | testbed-manager : ok=7  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-05-25 03:49:07.255989 | orchestrator |
2025-05-25 03:49:07.256002 | orchestrator |
2025-05-25 03:49:07.256014 | orchestrator | TASKS RECAP ********************************************************************
2025-05-25 03:49:07.256028 | orchestrator | Sunday 25 May 2025 03:48:37 +0000 (0:00:02.188) 0:00:34.564 ************
2025-05-25 03:49:07.256040 | orchestrator | ===============================================================================
2025-05-25 03:49:07.256053 | orchestrator | osism.services.homer : Manage homer service ---------------------------- 24.18s
2025-05-25 03:49:07.256065 | orchestrator | osism.services.homer : Restart homer service ---------------------------- 2.19s
2025-05-25 03:49:07.256078 | orchestrator | osism.services.homer : Create traefik external network ------------------ 2.10s
2025-05-25 03:49:07.256089 | orchestrator | osism.services.homer : Copy config.yml configuration file --------------- 1.99s
2025-05-25 03:49:07.256100 | orchestrator | osism.services.homer : Copy docker-compose.yml file --------------------- 1.89s
2025-05-25 03:49:07.256110 | orchestrator | osism.services.homer : Create required directories ---------------------- 1.30s
2025-05-25 03:49:07.256154 | orchestrator | osism.services.homer : Inform about new parameter homer_url_opensearch_dashboards --- 0.38s
2025-05-25 03:49:07.256174 | orchestrator |
2025-05-25 03:49:07.256195 | orchestrator |
2025-05-25 03:49:07.256212 | orchestrator | PLAY [Apply role openstackclient] **********************************************
2025-05-25 03:49:07.256230 | orchestrator |
2025-05-25 03:49:07.256242 | orchestrator | TASK [osism.services.openstackclient : Include tasks] **************************
2025-05-25 03:49:07.256260 | orchestrator | Sunday 25 May 2025 03:48:04 +0000 (0:00:00.982) 0:00:00.982 ************
2025-05-25 03:49:07.256278 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/openstackclient/tasks/container-Debian-family.yml for testbed-manager
2025-05-25 03:49:07.256290 | orchestrator |
2025-05-25 03:49:07.256301 | orchestrator | TASK [osism.services.openstackclient : Create required directories] ************
2025-05-25 03:49:07.256311 | orchestrator | Sunday 25 May 2025 03:48:05 +0000 (0:00:00.632) 0:00:01.615 ************
2025-05-25 03:49:07.256322 | orchestrator | changed: [testbed-manager] => (item=/opt/configuration/environments/openstack)
2025-05-25 03:49:07.256333 | orchestrator | changed: [testbed-manager] => (item=/opt/openstackclient/data)
2025-05-25 03:49:07.256343 | orchestrator | ok: [testbed-manager] => (item=/opt/openstackclient)
2025-05-25 03:49:07.256354 | orchestrator |
2025-05-25 03:49:07.256365 | orchestrator |
TASK [osism.services.openstackclient : Copy docker-compose.yml file] *********** 2025-05-25 03:49:07.256376 | orchestrator | Sunday 25 May 2025 03:48:06 +0000 (0:00:01.569) 0:00:03.184 ************ 2025-05-25 03:49:07.256390 | orchestrator | changed: [testbed-manager] 2025-05-25 03:49:07.256408 | orchestrator | 2025-05-25 03:49:07.256426 | orchestrator | TASK [osism.services.openstackclient : Manage openstackclient service] ********* 2025-05-25 03:49:07.256444 | orchestrator | Sunday 25 May 2025 03:48:08 +0000 (0:00:01.478) 0:00:04.663 ************ 2025-05-25 03:49:07.256481 | orchestrator | FAILED - RETRYING: [testbed-manager]: Manage openstackclient service (10 retries left). 2025-05-25 03:49:07.256501 | orchestrator | ok: [testbed-manager] 2025-05-25 03:49:07.256518 | orchestrator | 2025-05-25 03:49:07.256536 | orchestrator | TASK [osism.services.openstackclient : Copy openstack wrapper script] ********** 2025-05-25 03:49:07.256553 | orchestrator | Sunday 25 May 2025 03:48:44 +0000 (0:00:36.416) 0:00:41.079 ************ 2025-05-25 03:49:07.256570 | orchestrator | changed: [testbed-manager] 2025-05-25 03:49:07.256589 | orchestrator | 2025-05-25 03:49:07.256605 | orchestrator | TASK [osism.services.openstackclient : Remove ospurge wrapper script] ********** 2025-05-25 03:49:07.256623 | orchestrator | Sunday 25 May 2025 03:48:46 +0000 (0:00:01.666) 0:00:42.746 ************ 2025-05-25 03:49:07.256640 | orchestrator | ok: [testbed-manager] 2025-05-25 03:49:07.256657 | orchestrator | 2025-05-25 03:49:07.256675 | orchestrator | RUNNING HANDLER [osism.services.openstackclient : Restart openstackclient service] *** 2025-05-25 03:49:07.256693 | orchestrator | Sunday 25 May 2025 03:48:47 +0000 (0:00:01.177) 0:00:43.923 ************ 2025-05-25 03:49:07.256711 | orchestrator | changed: [testbed-manager] 2025-05-25 03:49:07.256728 | orchestrator | 2025-05-25 03:49:07.256745 | orchestrator | RUNNING HANDLER [osism.services.openstackclient : Ensure that all containers are up] 
*** 2025-05-25 03:49:07.256764 | orchestrator | Sunday 25 May 2025 03:48:49 +0000 (0:00:01.567) 0:00:45.491 ************ 2025-05-25 03:49:07.256781 | orchestrator | changed: [testbed-manager] 2025-05-25 03:49:07.256800 | orchestrator | 2025-05-25 03:49:07.256817 | orchestrator | RUNNING HANDLER [osism.services.openstackclient : Wait for an healthy service] *** 2025-05-25 03:49:07.256835 | orchestrator | Sunday 25 May 2025 03:48:49 +0000 (0:00:00.786) 0:00:46.277 ************ 2025-05-25 03:49:07.256854 | orchestrator | changed: [testbed-manager] 2025-05-25 03:49:07.256872 | orchestrator | 2025-05-25 03:49:07.256891 | orchestrator | RUNNING HANDLER [osism.services.openstackclient : Copy bash completion script] *** 2025-05-25 03:49:07.256908 | orchestrator | Sunday 25 May 2025 03:48:50 +0000 (0:00:00.712) 0:00:46.989 ************ 2025-05-25 03:49:07.256927 | orchestrator | ok: [testbed-manager] 2025-05-25 03:49:07.256945 | orchestrator | 2025-05-25 03:49:07.256963 | orchestrator | PLAY RECAP ********************************************************************* 2025-05-25 03:49:07.256982 | orchestrator | testbed-manager : ok=10  changed=6  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-05-25 03:49:07.257000 | orchestrator | 2025-05-25 03:49:07.257017 | orchestrator | 2025-05-25 03:49:07.257036 | orchestrator | TASKS RECAP ******************************************************************** 2025-05-25 03:49:07.257054 | orchestrator | Sunday 25 May 2025 03:48:51 +0000 (0:00:00.397) 0:00:47.387 ************ 2025-05-25 03:49:07.257087 | orchestrator | =============================================================================== 2025-05-25 03:49:07.257105 | orchestrator | osism.services.openstackclient : Manage openstackclient service -------- 36.42s 2025-05-25 03:49:07.257159 | orchestrator | osism.services.openstackclient : Copy openstack wrapper script ---------- 1.67s 2025-05-25 03:49:07.257179 | orchestrator | osism.services.openstackclient : Create 
required directories ------------ 1.57s 2025-05-25 03:49:07.257198 | orchestrator | osism.services.openstackclient : Restart openstackclient service -------- 1.57s 2025-05-25 03:49:07.257215 | orchestrator | osism.services.openstackclient : Copy docker-compose.yml file ----------- 1.48s 2025-05-25 03:49:07.257234 | orchestrator | osism.services.openstackclient : Remove ospurge wrapper script ---------- 1.18s 2025-05-25 03:49:07.257252 | orchestrator | osism.services.openstackclient : Ensure that all containers are up ------ 0.79s 2025-05-25 03:49:07.257270 | orchestrator | osism.services.openstackclient : Wait for an healthy service ------------ 0.71s 2025-05-25 03:49:07.257288 | orchestrator | osism.services.openstackclient : Include tasks -------------------------- 0.63s 2025-05-25 03:49:07.257305 | orchestrator | osism.services.openstackclient : Copy bash completion script ------------ 0.40s 2025-05-25 03:49:07.257323 | orchestrator | 2025-05-25 03:49:07.257341 | orchestrator | 2025-05-25 03:49:07.257359 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-05-25 03:49:07.257377 | orchestrator | 2025-05-25 03:49:07.257394 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-05-25 03:49:07.257412 | orchestrator | Sunday 25 May 2025 03:48:02 +0000 (0:00:00.412) 0:00:00.412 ************ 2025-05-25 03:49:07.257430 | orchestrator | changed: [testbed-manager] => (item=enable_netdata_True) 2025-05-25 03:49:07.257449 | orchestrator | changed: [testbed-node-0] => (item=enable_netdata_True) 2025-05-25 03:49:07.257468 | orchestrator | changed: [testbed-node-1] => (item=enable_netdata_True) 2025-05-25 03:49:07.257486 | orchestrator | changed: [testbed-node-2] => (item=enable_netdata_True) 2025-05-25 03:49:07.257514 | orchestrator | changed: [testbed-node-3] => (item=enable_netdata_True) 2025-05-25 03:49:07.257533 | orchestrator | changed: [testbed-node-4] => 
(item=enable_netdata_True) 2025-05-25 03:49:07.257551 | orchestrator | changed: [testbed-node-5] => (item=enable_netdata_True) 2025-05-25 03:49:07.257570 | orchestrator | 2025-05-25 03:49:07.257589 | orchestrator | PLAY [Apply role netdata] ****************************************************** 2025-05-25 03:49:07.257607 | orchestrator | 2025-05-25 03:49:07.257624 | orchestrator | TASK [osism.services.netdata : Include distribution specific install tasks] **** 2025-05-25 03:49:07.257643 | orchestrator | Sunday 25 May 2025 03:48:05 +0000 (0:00:02.350) 0:00:02.763 ************ 2025-05-25 03:49:07.257677 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netdata/tasks/install-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-05-25 03:49:07.257699 | orchestrator | 2025-05-25 03:49:07.257716 | orchestrator | TASK [osism.services.netdata : Remove old architecture-dependent repository] *** 2025-05-25 03:49:07.257735 | orchestrator | Sunday 25 May 2025 03:48:07 +0000 (0:00:02.237) 0:00:05.000 ************ 2025-05-25 03:49:07.257753 | orchestrator | ok: [testbed-node-1] 2025-05-25 03:49:07.257772 | orchestrator | ok: [testbed-node-0] 2025-05-25 03:49:07.257790 | orchestrator | ok: [testbed-manager] 2025-05-25 03:49:07.257808 | orchestrator | ok: [testbed-node-2] 2025-05-25 03:49:07.257826 | orchestrator | ok: [testbed-node-3] 2025-05-25 03:49:07.257858 | orchestrator | ok: [testbed-node-4] 2025-05-25 03:49:07.257876 | orchestrator | ok: [testbed-node-5] 2025-05-25 03:49:07.257894 | orchestrator | 2025-05-25 03:49:07.257912 | orchestrator | TASK [osism.services.netdata : Install apt-transport-https package] ************ 2025-05-25 03:49:07.257930 | orchestrator | Sunday 25 May 2025 03:48:08 +0000 (0:00:01.358) 0:00:06.359 ************ 2025-05-25 03:49:07.257948 | orchestrator | ok: [testbed-node-0] 2025-05-25 03:49:07.257966 | orchestrator 
| ok: [testbed-node-1] 2025-05-25 03:49:07.257984 | orchestrator | ok: [testbed-manager] 2025-05-25 03:49:07.258222 | orchestrator | ok: [testbed-node-2] 2025-05-25 03:49:07.258249 | orchestrator | ok: [testbed-node-3] 2025-05-25 03:49:07.258267 | orchestrator | ok: [testbed-node-4] 2025-05-25 03:49:07.258285 | orchestrator | ok: [testbed-node-5] 2025-05-25 03:49:07.258303 | orchestrator | 2025-05-25 03:49:07.258321 | orchestrator | TASK [osism.services.netdata : Add repository gpg key] ************************* 2025-05-25 03:49:07.258338 | orchestrator | Sunday 25 May 2025 03:48:12 +0000 (0:00:03.534) 0:00:09.893 ************ 2025-05-25 03:49:07.258356 | orchestrator | changed: [testbed-manager] 2025-05-25 03:49:07.258374 | orchestrator | changed: [testbed-node-0] 2025-05-25 03:49:07.258394 | orchestrator | changed: [testbed-node-1] 2025-05-25 03:49:07.258413 | orchestrator | changed: [testbed-node-2] 2025-05-25 03:49:07.258431 | orchestrator | changed: [testbed-node-3] 2025-05-25 03:49:07.258450 | orchestrator | changed: [testbed-node-4] 2025-05-25 03:49:07.258468 | orchestrator | changed: [testbed-node-5] 2025-05-25 03:49:07.258486 | orchestrator | 2025-05-25 03:49:07.258505 | orchestrator | TASK [osism.services.netdata : Add repository] ********************************* 2025-05-25 03:49:07.258523 | orchestrator | Sunday 25 May 2025 03:48:15 +0000 (0:00:02.863) 0:00:12.756 ************ 2025-05-25 03:49:07.258542 | orchestrator | changed: [testbed-manager] 2025-05-25 03:49:07.258554 | orchestrator | changed: [testbed-node-0] 2025-05-25 03:49:07.258564 | orchestrator | changed: [testbed-node-1] 2025-05-25 03:49:07.258575 | orchestrator | changed: [testbed-node-2] 2025-05-25 03:49:07.258585 | orchestrator | changed: [testbed-node-3] 2025-05-25 03:49:07.258596 | orchestrator | changed: [testbed-node-4] 2025-05-25 03:49:07.258606 | orchestrator | changed: [testbed-node-5] 2025-05-25 03:49:07.258617 | orchestrator | 2025-05-25 03:49:07.258628 | orchestrator | TASK 
[osism.services.netdata : Install package netdata] ************************ 2025-05-25 03:49:07.258638 | orchestrator | Sunday 25 May 2025 03:48:25 +0000 (0:00:10.126) 0:00:22.882 ************ 2025-05-25 03:49:07.258649 | orchestrator | changed: [testbed-node-3] 2025-05-25 03:49:07.258660 | orchestrator | changed: [testbed-node-2] 2025-05-25 03:49:07.258670 | orchestrator | changed: [testbed-manager] 2025-05-25 03:49:07.258681 | orchestrator | changed: [testbed-node-0] 2025-05-25 03:49:07.258691 | orchestrator | changed: [testbed-node-1] 2025-05-25 03:49:07.258702 | orchestrator | changed: [testbed-node-4] 2025-05-25 03:49:07.258712 | orchestrator | changed: [testbed-node-5] 2025-05-25 03:49:07.258723 | orchestrator | 2025-05-25 03:49:07.258734 | orchestrator | TASK [osism.services.netdata : Include config tasks] *************************** 2025-05-25 03:49:07.258744 | orchestrator | Sunday 25 May 2025 03:48:40 +0000 (0:00:15.733) 0:00:38.616 ************ 2025-05-25 03:49:07.258755 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netdata/tasks/config.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-05-25 03:49:07.258766 | orchestrator | 2025-05-25 03:49:07.258776 | orchestrator | TASK [osism.services.netdata : Copy configuration files] *********************** 2025-05-25 03:49:07.258786 | orchestrator | Sunday 25 May 2025 03:48:42 +0000 (0:00:01.150) 0:00:39.767 ************ 2025-05-25 03:49:07.258795 | orchestrator | changed: [testbed-manager] => (item=netdata.conf) 2025-05-25 03:49:07.258805 | orchestrator | changed: [testbed-node-0] => (item=netdata.conf) 2025-05-25 03:49:07.258814 | orchestrator | changed: [testbed-node-1] => (item=netdata.conf) 2025-05-25 03:49:07.258824 | orchestrator | changed: [testbed-node-2] => (item=netdata.conf) 2025-05-25 03:49:07.258833 | orchestrator | changed: [testbed-node-3] => (item=netdata.conf) 2025-05-25 
03:49:07.258843 | orchestrator | changed: [testbed-node-0] => (item=stream.conf) 2025-05-25 03:49:07.258852 | orchestrator | changed: [testbed-manager] => (item=stream.conf) 2025-05-25 03:49:07.258862 | orchestrator | changed: [testbed-node-4] => (item=netdata.conf) 2025-05-25 03:49:07.258872 | orchestrator | changed: [testbed-node-1] => (item=stream.conf) 2025-05-25 03:49:07.258881 | orchestrator | changed: [testbed-node-2] => (item=stream.conf) 2025-05-25 03:49:07.258891 | orchestrator | changed: [testbed-node-5] => (item=netdata.conf) 2025-05-25 03:49:07.258911 | orchestrator | changed: [testbed-node-3] => (item=stream.conf) 2025-05-25 03:49:07.258920 | orchestrator | changed: [testbed-node-4] => (item=stream.conf) 2025-05-25 03:49:07.258936 | orchestrator | changed: [testbed-node-5] => (item=stream.conf) 2025-05-25 03:49:07.258946 | orchestrator | 2025-05-25 03:49:07.258955 | orchestrator | TASK [osism.services.netdata : Retrieve /etc/netdata/.opt-out-from-anonymous-statistics status] *** 2025-05-25 03:49:07.258965 | orchestrator | Sunday 25 May 2025 03:48:49 +0000 (0:00:07.334) 0:00:47.101 ************ 2025-05-25 03:49:07.258975 | orchestrator | ok: [testbed-manager] 2025-05-25 03:49:07.258985 | orchestrator | ok: [testbed-node-0] 2025-05-25 03:49:07.258994 | orchestrator | ok: [testbed-node-1] 2025-05-25 03:49:07.259004 | orchestrator | ok: [testbed-node-2] 2025-05-25 03:49:07.259013 | orchestrator | ok: [testbed-node-3] 2025-05-25 03:49:07.259023 | orchestrator | ok: [testbed-node-4] 2025-05-25 03:49:07.259032 | orchestrator | ok: [testbed-node-5] 2025-05-25 03:49:07.259041 | orchestrator | 2025-05-25 03:49:07.259051 | orchestrator | TASK [osism.services.netdata : Opt out from anonymous statistics] ************** 2025-05-25 03:49:07.259061 | orchestrator | Sunday 25 May 2025 03:48:51 +0000 (0:00:01.837) 0:00:48.939 ************ 2025-05-25 03:49:07.259071 | orchestrator | changed: [testbed-manager] 2025-05-25 03:49:07.259080 | orchestrator | changed: 
[testbed-node-0] 2025-05-25 03:49:07.259090 | orchestrator | changed: [testbed-node-1] 2025-05-25 03:49:07.259099 | orchestrator | changed: [testbed-node-2] 2025-05-25 03:49:07.259109 | orchestrator | changed: [testbed-node-3] 2025-05-25 03:49:07.259143 | orchestrator | changed: [testbed-node-4] 2025-05-25 03:49:07.259155 | orchestrator | changed: [testbed-node-5] 2025-05-25 03:49:07.259164 | orchestrator | 2025-05-25 03:49:07.259174 | orchestrator | TASK [osism.services.netdata : Add netdata user to docker group] *************** 2025-05-25 03:49:07.259196 | orchestrator | Sunday 25 May 2025 03:48:53 +0000 (0:00:02.217) 0:00:51.156 ************ 2025-05-25 03:49:07.259206 | orchestrator | ok: [testbed-manager] 2025-05-25 03:49:07.259216 | orchestrator | ok: [testbed-node-1] 2025-05-25 03:49:07.259225 | orchestrator | ok: [testbed-node-0] 2025-05-25 03:49:07.259235 | orchestrator | ok: [testbed-node-2] 2025-05-25 03:49:07.259244 | orchestrator | ok: [testbed-node-3] 2025-05-25 03:49:07.259256 | orchestrator | ok: [testbed-node-4] 2025-05-25 03:49:07.259271 | orchestrator | ok: [testbed-node-5] 2025-05-25 03:49:07.259288 | orchestrator | 2025-05-25 03:49:07.259312 | orchestrator | TASK [osism.services.netdata : Manage service netdata] ************************* 2025-05-25 03:49:07.259330 | orchestrator | Sunday 25 May 2025 03:48:55 +0000 (0:00:01.604) 0:00:52.761 ************ 2025-05-25 03:49:07.259344 | orchestrator | ok: [testbed-node-2] 2025-05-25 03:49:07.259358 | orchestrator | ok: [testbed-node-3] 2025-05-25 03:49:07.259372 | orchestrator | ok: [testbed-node-0] 2025-05-25 03:49:07.259386 | orchestrator | ok: [testbed-node-1] 2025-05-25 03:49:07.259401 | orchestrator | ok: [testbed-manager] 2025-05-25 03:49:07.259416 | orchestrator | ok: [testbed-node-4] 2025-05-25 03:49:07.259430 | orchestrator | ok: [testbed-node-5] 2025-05-25 03:49:07.259446 | orchestrator | 2025-05-25 03:49:07.259461 | orchestrator | TASK [osism.services.netdata : Include host type specific 
tasks] *************** 2025-05-25 03:49:07.259476 | orchestrator | Sunday 25 May 2025 03:48:57 +0000 (0:00:02.062) 0:00:54.823 ************ 2025-05-25 03:49:07.259492 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netdata/tasks/server.yml for testbed-manager 2025-05-25 03:49:07.259510 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netdata/tasks/client.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-05-25 03:49:07.259528 | orchestrator | 2025-05-25 03:49:07.259545 | orchestrator | TASK [osism.services.netdata : Set sysctl vm.max_map_count parameter] ********** 2025-05-25 03:49:07.259560 | orchestrator | Sunday 25 May 2025 03:48:58 +0000 (0:00:01.757) 0:00:56.580 ************ 2025-05-25 03:49:07.259577 | orchestrator | changed: [testbed-manager] 2025-05-25 03:49:07.259587 | orchestrator | 2025-05-25 03:49:07.259596 | orchestrator | RUNNING HANDLER [osism.services.netdata : Restart service netdata] ************* 2025-05-25 03:49:07.259615 | orchestrator | Sunday 25 May 2025 03:49:01 +0000 (0:00:02.561) 0:00:59.142 ************ 2025-05-25 03:49:07.259625 | orchestrator | changed: [testbed-node-0] 2025-05-25 03:49:07.259635 | orchestrator | changed: [testbed-manager] 2025-05-25 03:49:07.259644 | orchestrator | changed: [testbed-node-1] 2025-05-25 03:49:07.259654 | orchestrator | changed: [testbed-node-2] 2025-05-25 03:49:07.259663 | orchestrator | changed: [testbed-node-3] 2025-05-25 03:49:07.259673 | orchestrator | changed: [testbed-node-5] 2025-05-25 03:49:07.259682 | orchestrator | changed: [testbed-node-4] 2025-05-25 03:49:07.259692 | orchestrator | 2025-05-25 03:49:07.259701 | orchestrator | PLAY RECAP ********************************************************************* 2025-05-25 03:49:07.259711 | orchestrator | testbed-manager : ok=16  changed=8  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 
2025-05-25 03:49:07.259721 | orchestrator | testbed-node-0 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-05-25 03:49:07.259731 | orchestrator | testbed-node-1 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-05-25 03:49:07.259740 | orchestrator | testbed-node-2 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-05-25 03:49:07.259750 | orchestrator | testbed-node-3 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-05-25 03:49:07.259760 | orchestrator | testbed-node-4 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-05-25 03:49:07.259769 | orchestrator | testbed-node-5 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-05-25 03:49:07.259778 | orchestrator | 2025-05-25 03:49:07.259788 | orchestrator | 2025-05-25 03:49:07.259798 | orchestrator | TASKS RECAP ******************************************************************** 2025-05-25 03:49:07.259807 | orchestrator | Sunday 25 May 2025 03:49:04 +0000 (0:00:03.225) 0:01:02.367 ************ 2025-05-25 03:49:07.259817 | orchestrator | =============================================================================== 2025-05-25 03:49:07.259826 | orchestrator | osism.services.netdata : Install package netdata ----------------------- 15.73s 2025-05-25 03:49:07.259836 | orchestrator | osism.services.netdata : Add repository -------------------------------- 10.13s 2025-05-25 03:49:07.259845 | orchestrator | osism.services.netdata : Copy configuration files ----------------------- 7.33s 2025-05-25 03:49:07.259855 | orchestrator | osism.services.netdata : Install apt-transport-https package ------------ 3.53s 2025-05-25 03:49:07.259864 | orchestrator | osism.services.netdata : Restart service netdata ------------------------ 3.23s 2025-05-25 03:49:07.259874 | orchestrator | osism.services.netdata : Add repository gpg key ------------------------- 2.86s 
2025-05-25 03:49:07.259883 | orchestrator | osism.services.netdata : Set sysctl vm.max_map_count parameter ---------- 2.56s 2025-05-25 03:49:07.259893 | orchestrator | Group hosts based on enabled services ----------------------------------- 2.35s 2025-05-25 03:49:07.259902 | orchestrator | osism.services.netdata : Include distribution specific install tasks ---- 2.24s 2025-05-25 03:49:07.259911 | orchestrator | osism.services.netdata : Opt out from anonymous statistics -------------- 2.22s 2025-05-25 03:49:07.259921 | orchestrator | osism.services.netdata : Manage service netdata ------------------------- 2.06s 2025-05-25 03:49:07.259939 | orchestrator | osism.services.netdata : Retrieve /etc/netdata/.opt-out-from-anonymous-statistics status --- 1.84s 2025-05-25 03:49:07.259949 | orchestrator | osism.services.netdata : Include host type specific tasks --------------- 1.76s 2025-05-25 03:49:07.259959 | orchestrator | osism.services.netdata : Add netdata user to docker group --------------- 1.60s 2025-05-25 03:49:07.259968 | orchestrator | osism.services.netdata : Remove old architecture-dependent repository --- 1.36s 2025-05-25 03:49:07.259986 | orchestrator | osism.services.netdata : Include config tasks --------------------------- 1.15s 2025-05-25 03:49:07.259996 | orchestrator | 2025-05-25 03:49:07 | INFO  | Task b1ea9b5e-dd9c-412e-a25f-8ec63ab94323 is in state SUCCESS 2025-05-25 03:49:07.260006 | orchestrator | 2025-05-25 03:49:07 | INFO  | Task b12395a3-e59c-4b67-8638-0a67c0e3b085 is in state STARTED 2025-05-25 03:49:07.260016 | orchestrator | 2025-05-25 03:49:07 | INFO  | Task 4f0d98af-dbfb-4167-8dc6-74aeece0199b is in state STARTED 2025-05-25 03:49:07.260026 | orchestrator | 2025-05-25 03:49:07 | INFO  | Wait 1 second(s) until the next check 2025-05-25 03:49:10.305174 | orchestrator | 2025-05-25 03:49:10 | INFO  | Task f1699c70-0b05-422e-b6a2-8662964871e5 is in state STARTED 2025-05-25 03:49:10.305269 | orchestrator | 2025-05-25 03:49:10 | INFO  | Task 
c0214740-d9b7-4bee-98a0-0214c94bbfee is in state STARTED 2025-05-25 03:49:10.305317 | orchestrator | 2025-05-25 03:49:10 | INFO  | Task b12395a3-e59c-4b67-8638-0a67c0e3b085 is in state STARTED 2025-05-25 03:49:10.306679 | orchestrator | 2025-05-25 03:49:10 | INFO  | Task 4f0d98af-dbfb-4167-8dc6-74aeece0199b is in state STARTED 2025-05-25 03:49:10.306952 | orchestrator | 2025-05-25 03:49:10 | INFO  | Wait 1 second(s) until the next check 2025-05-25 03:49:56.192740 | orchestrator | 2025-05-25 03:49:56 | INFO  | Task f1699c70-0b05-422e-b6a2-8662964871e5 is in state STARTED 2025-05-25 03:49:56.195235 | orchestrator | 2025-05-25 03:49:56 | INFO  | Task 
c0214740-d9b7-4bee-98a0-0214c94bbfee is in state STARTED 2025-05-25 03:49:56.197081 | orchestrator | 2025-05-25 03:49:56 | INFO  | Task b12395a3-e59c-4b67-8638-0a67c0e3b085 is in state STARTED 2025-05-25 03:49:56.198733 | orchestrator | 2025-05-25 03:49:56 | INFO  | Task 4f0d98af-dbfb-4167-8dc6-74aeece0199b is in state STARTED 2025-05-25 03:49:56.198775 | orchestrator | 2025-05-25 03:49:56 | INFO  | Wait 1 second(s) until the next check 2025-05-25 03:49:59.245036 | orchestrator | 2025-05-25 03:49:59 | INFO  | Task f1699c70-0b05-422e-b6a2-8662964871e5 is in state STARTED 2025-05-25 03:49:59.246414 | orchestrator | 2025-05-25 03:49:59 | INFO  | Task c0214740-d9b7-4bee-98a0-0214c94bbfee is in state STARTED 2025-05-25 03:49:59.248652 | orchestrator | 2025-05-25 03:49:59 | INFO  | Task b12395a3-e59c-4b67-8638-0a67c0e3b085 is in state STARTED 2025-05-25 03:49:59.250711 | orchestrator | 2025-05-25 03:49:59 | INFO  | Task 4f0d98af-dbfb-4167-8dc6-74aeece0199b is in state STARTED 2025-05-25 03:49:59.250895 | orchestrator | 2025-05-25 03:49:59 | INFO  | Wait 1 second(s) until the next check 2025-05-25 03:50:02.288259 | orchestrator | 2025-05-25 03:50:02 | INFO  | Task f1699c70-0b05-422e-b6a2-8662964871e5 is in state STARTED 2025-05-25 03:50:02.288859 | orchestrator | 2025-05-25 03:50:02 | INFO  | Task c0214740-d9b7-4bee-98a0-0214c94bbfee is in state STARTED 2025-05-25 03:50:02.290631 | orchestrator | 2025-05-25 03:50:02 | INFO  | Task b12395a3-e59c-4b67-8638-0a67c0e3b085 is in state STARTED 2025-05-25 03:50:02.292007 | orchestrator | 2025-05-25 03:50:02 | INFO  | Task 4f0d98af-dbfb-4167-8dc6-74aeece0199b is in state STARTED 2025-05-25 03:50:02.292044 | orchestrator | 2025-05-25 03:50:02 | INFO  | Wait 1 second(s) until the next check 2025-05-25 03:50:05.347612 | orchestrator | 2025-05-25 03:50:05 | INFO  | Task f1699c70-0b05-422e-b6a2-8662964871e5 is in state STARTED 2025-05-25 03:50:05.350798 | orchestrator | 2025-05-25 03:50:05 | INFO  | Task 
c0214740-d9b7-4bee-98a0-0214c94bbfee is in state STARTED 2025-05-25 03:50:05.351725 | orchestrator | 2025-05-25 03:50:05 | INFO  | Task b12395a3-e59c-4b67-8638-0a67c0e3b085 is in state STARTED 2025-05-25 03:50:05.353559 | orchestrator | 2025-05-25 03:50:05 | INFO  | Task 4f0d98af-dbfb-4167-8dc6-74aeece0199b is in state STARTED 2025-05-25 03:50:05.353594 | orchestrator | 2025-05-25 03:50:05 | INFO  | Wait 1 second(s) until the next check 2025-05-25 03:50:08.401427 | orchestrator | 2025-05-25 03:50:08 | INFO  | Task f1699c70-0b05-422e-b6a2-8662964871e5 is in state STARTED 2025-05-25 03:50:08.405814 | orchestrator | 2025-05-25 03:50:08 | INFO  | Task c0214740-d9b7-4bee-98a0-0214c94bbfee is in state STARTED 2025-05-25 03:50:08.406646 | orchestrator | 2025-05-25 03:50:08 | INFO  | Task b12395a3-e59c-4b67-8638-0a67c0e3b085 is in state STARTED 2025-05-25 03:50:08.409509 | orchestrator | 2025-05-25 03:50:08 | INFO  | Task 4f0d98af-dbfb-4167-8dc6-74aeece0199b is in state STARTED 2025-05-25 03:50:08.409532 | orchestrator | 2025-05-25 03:50:08 | INFO  | Wait 1 second(s) until the next check 2025-05-25 03:50:11.455112 | orchestrator | 2025-05-25 03:50:11 | INFO  | Task f1699c70-0b05-422e-b6a2-8662964871e5 is in state STARTED 2025-05-25 03:50:11.456568 | orchestrator | 2025-05-25 03:50:11 | INFO  | Task c0214740-d9b7-4bee-98a0-0214c94bbfee is in state STARTED 2025-05-25 03:50:11.458384 | orchestrator | 2025-05-25 03:50:11 | INFO  | Task b12395a3-e59c-4b67-8638-0a67c0e3b085 is in state SUCCESS 2025-05-25 03:50:11.460272 | orchestrator | 2025-05-25 03:50:11 | INFO  | Task 4f0d98af-dbfb-4167-8dc6-74aeece0199b is in state STARTED 2025-05-25 03:50:11.460364 | orchestrator | 2025-05-25 03:50:11 | INFO  | Wait 1 second(s) until the next check 2025-05-25 03:50:14.515557 | orchestrator | 2025-05-25 03:50:14 | INFO  | Task f1699c70-0b05-422e-b6a2-8662964871e5 is in state STARTED 2025-05-25 03:50:14.515686 | orchestrator | 2025-05-25 03:50:14 | INFO  | Task 
c0214740-d9b7-4bee-98a0-0214c94bbfee is in state STARTED 2025-05-25 03:50:14.517019 | orchestrator | 2025-05-25 03:50:14 | INFO  | Task 4f0d98af-dbfb-4167-8dc6-74aeece0199b is in state STARTED 2025-05-25 03:50:14.517047 | orchestrator | 2025-05-25 03:50:14 | INFO  | Wait 1 second(s) until the next check 2025-05-25 03:50:17.555967 | orchestrator | 2025-05-25 03:50:17 | INFO  | Task f1699c70-0b05-422e-b6a2-8662964871e5 is in state STARTED 2025-05-25 03:50:17.556373 | orchestrator | 2025-05-25 03:50:17 | INFO  | Task c0214740-d9b7-4bee-98a0-0214c94bbfee is in state STARTED 2025-05-25 03:50:17.559013 | orchestrator | 2025-05-25 03:50:17 | INFO  | Task 4f0d98af-dbfb-4167-8dc6-74aeece0199b is in state STARTED 2025-05-25 03:50:17.559096 | orchestrator | 2025-05-25 03:50:17 | INFO  | Wait 1 second(s) until the next check 2025-05-25 03:50:20.595938 | orchestrator | 2025-05-25 03:50:20 | INFO  | Task f1699c70-0b05-422e-b6a2-8662964871e5 is in state STARTED 2025-05-25 03:50:20.597477 | orchestrator | 2025-05-25 03:50:20 | INFO  | Task c0214740-d9b7-4bee-98a0-0214c94bbfee is in state STARTED 2025-05-25 03:50:20.598454 | orchestrator | 2025-05-25 03:50:20 | INFO  | Task 4f0d98af-dbfb-4167-8dc6-74aeece0199b is in state STARTED 2025-05-25 03:50:20.598540 | orchestrator | 2025-05-25 03:50:20 | INFO  | Wait 1 second(s) until the next check 2025-05-25 03:50:23.645726 | orchestrator | 2025-05-25 03:50:23 | INFO  | Task f1699c70-0b05-422e-b6a2-8662964871e5 is in state STARTED 2025-05-25 03:50:23.654572 | orchestrator | 2025-05-25 03:50:23 | INFO  | Task c0214740-d9b7-4bee-98a0-0214c94bbfee is in state STARTED 2025-05-25 03:50:23.655285 | orchestrator | 2025-05-25 03:50:23 | INFO  | Task 4f0d98af-dbfb-4167-8dc6-74aeece0199b is in state STARTED 2025-05-25 03:50:23.655516 | orchestrator | 2025-05-25 03:50:23 | INFO  | Wait 1 second(s) until the next check 2025-05-25 03:50:26.697279 | orchestrator | 2025-05-25 03:50:26 | INFO  | Task f1699c70-0b05-422e-b6a2-8662964871e5 is in state 
STARTED 2025-05-25 03:50:26.698256 | orchestrator | 2025-05-25 03:50:26 | INFO  | Task c0214740-d9b7-4bee-98a0-0214c94bbfee is in state STARTED 2025-05-25 03:50:26.699996 | orchestrator | 2025-05-25 03:50:26 | INFO  | Task 4f0d98af-dbfb-4167-8dc6-74aeece0199b is in state STARTED 2025-05-25 03:50:26.700108 | orchestrator | 2025-05-25 03:50:26 | INFO  | Wait 1 second(s) until the next check 2025-05-25 03:50:29.747791 | orchestrator | 2025-05-25 03:50:29 | INFO  | Task f1699c70-0b05-422e-b6a2-8662964871e5 is in state STARTED 2025-05-25 03:50:29.749565 | orchestrator | 2025-05-25 03:50:29 | INFO  | Task c0214740-d9b7-4bee-98a0-0214c94bbfee is in state STARTED 2025-05-25 03:50:29.753045 | orchestrator | 2025-05-25 03:50:29 | INFO  | Task 4f0d98af-dbfb-4167-8dc6-74aeece0199b is in state STARTED 2025-05-25 03:50:29.753096 | orchestrator | 2025-05-25 03:50:29 | INFO  | Wait 1 second(s) until the next check 2025-05-25 03:50:32.798489 | orchestrator | 2025-05-25 03:50:32 | INFO  | Task f1699c70-0b05-422e-b6a2-8662964871e5 is in state STARTED 2025-05-25 03:50:32.799883 | orchestrator | 2025-05-25 03:50:32 | INFO  | Task c0214740-d9b7-4bee-98a0-0214c94bbfee is in state STARTED 2025-05-25 03:50:32.803028 | orchestrator | 2025-05-25 03:50:32 | INFO  | Task 4f0d98af-dbfb-4167-8dc6-74aeece0199b is in state SUCCESS 2025-05-25 03:50:32.805485 | orchestrator | 2025-05-25 03:50:32.805524 | orchestrator | 2025-05-25 03:50:32.805537 | orchestrator | PLAY [Apply role phpmyadmin] *************************************************** 2025-05-25 03:50:32.805549 | orchestrator | 2025-05-25 03:50:32.805560 | orchestrator | TASK [osism.services.phpmyadmin : Create traefik external network] ************* 2025-05-25 03:50:32.805571 | orchestrator | Sunday 25 May 2025 03:48:24 +0000 (0:00:00.229) 0:00:00.229 ************ 2025-05-25 03:50:32.805583 | orchestrator | ok: [testbed-manager] 2025-05-25 03:50:32.805595 | orchestrator | 2025-05-25 03:50:32.805606 | orchestrator | TASK 
[osism.services.phpmyadmin : Create required directories] ***************** 2025-05-25 03:50:32.805617 | orchestrator | Sunday 25 May 2025 03:48:25 +0000 (0:00:00.979) 0:00:01.209 ************ 2025-05-25 03:50:32.805628 | orchestrator | changed: [testbed-manager] => (item=/opt/phpmyadmin) 2025-05-25 03:50:32.805658 | orchestrator | 2025-05-25 03:50:32.805669 | orchestrator | TASK [osism.services.phpmyadmin : Copy docker-compose.yml file] **************** 2025-05-25 03:50:32.805680 | orchestrator | Sunday 25 May 2025 03:48:26 +0000 (0:00:01.221) 0:00:02.431 ************ 2025-05-25 03:50:32.805691 | orchestrator | changed: [testbed-manager] 2025-05-25 03:50:32.805702 | orchestrator | 2025-05-25 03:50:32.805712 | orchestrator | TASK [osism.services.phpmyadmin : Manage phpmyadmin service] ******************* 2025-05-25 03:50:32.805723 | orchestrator | Sunday 25 May 2025 03:48:28 +0000 (0:00:01.648) 0:00:04.079 ************ 2025-05-25 03:50:32.805734 | orchestrator | FAILED - RETRYING: [testbed-manager]: Manage phpmyadmin service (10 retries left). 
2025-05-25 03:50:32.805745 | orchestrator | ok: [testbed-manager] 2025-05-25 03:50:32.805757 | orchestrator | 2025-05-25 03:50:32.805768 | orchestrator | RUNNING HANDLER [osism.services.phpmyadmin : Restart phpmyadmin service] ******* 2025-05-25 03:50:32.805778 | orchestrator | Sunday 25 May 2025 03:49:50 +0000 (0:01:22.162) 0:01:26.241 ************ 2025-05-25 03:50:32.805789 | orchestrator | changed: [testbed-manager] 2025-05-25 03:50:32.805800 | orchestrator | 2025-05-25 03:50:32.805810 | orchestrator | PLAY RECAP ********************************************************************* 2025-05-25 03:50:32.805822 | orchestrator | testbed-manager : ok=5  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-05-25 03:50:32.805834 | orchestrator | 2025-05-25 03:50:32.805844 | orchestrator | 2025-05-25 03:50:32.805855 | orchestrator | TASKS RECAP ******************************************************************** 2025-05-25 03:50:32.805866 | orchestrator | Sunday 25 May 2025 03:50:09 +0000 (0:00:18.696) 0:01:44.938 ************ 2025-05-25 03:50:32.805877 | orchestrator | =============================================================================== 2025-05-25 03:50:32.805887 | orchestrator | osism.services.phpmyadmin : Manage phpmyadmin service ------------------ 82.16s 2025-05-25 03:50:32.805898 | orchestrator | osism.services.phpmyadmin : Restart phpmyadmin service ----------------- 18.70s 2025-05-25 03:50:32.805909 | orchestrator | osism.services.phpmyadmin : Copy docker-compose.yml file ---------------- 1.65s 2025-05-25 03:50:32.805920 | orchestrator | osism.services.phpmyadmin : Create required directories ----------------- 1.22s 2025-05-25 03:50:32.805930 | orchestrator | osism.services.phpmyadmin : Create traefik external network ------------- 0.98s 2025-05-25 03:50:32.805941 | orchestrator | 2025-05-25 03:50:32.805952 | orchestrator | 2025-05-25 03:50:32.805962 | orchestrator | PLAY [Apply role common] 
*******************************************************
2025-05-25 03:50:32.805973 | orchestrator |
2025-05-25 03:50:32.805984 | orchestrator | TASK [common : include_tasks] **************************************************
2025-05-25 03:50:32.805994 | orchestrator | Sunday 25 May 2025 03:47:56 +0000 (0:00:00.258) 0:00:00.258 ************
2025-05-25 03:50:32.806005 | orchestrator | included: /ansible/roles/common/tasks/deploy.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-05-25 03:50:32.806106 | orchestrator |
2025-05-25 03:50:32.806141 | orchestrator | TASK [common : Ensuring config directories exist] ******************************
2025-05-25 03:50:32.806154 | orchestrator | Sunday 25 May 2025 03:47:57 +0000 (0:00:01.180) 0:00:01.439 ************
2025-05-25 03:50:32.806166 | orchestrator | changed: [testbed-node-0] => (item=[{'service_name': 'cron'}, 'cron'])
2025-05-25 03:50:32.806178 | orchestrator | changed: [testbed-manager] => (item=[{'service_name': 'cron'}, 'cron'])
2025-05-25 03:50:32.806190 | orchestrator | changed: [testbed-node-1] => (item=[{'service_name': 'cron'}, 'cron'])
2025-05-25 03:50:32.806202 | orchestrator | changed: [testbed-node-0] => (item=[{'service_name': 'fluentd'}, 'fluentd'])
2025-05-25 03:50:32.806214 | orchestrator | changed: [testbed-node-2] => (item=[{'service_name': 'cron'}, 'cron'])
2025-05-25 03:50:32.806226 | orchestrator | changed: [testbed-node-1] => (item=[{'service_name': 'fluentd'}, 'fluentd'])
2025-05-25 03:50:32.806238 | orchestrator | changed: [testbed-node-3] => (item=[{'service_name': 'cron'}, 'cron'])
2025-05-25 03:50:32.806251 | orchestrator | changed: [testbed-node-4] => (item=[{'service_name': 'cron'}, 'cron'])
2025-05-25 03:50:32.806272 | orchestrator | changed: [testbed-manager] => (item=[{'service_name': 'fluentd'}, 'fluentd'])
2025-05-25 03:50:32.806292 | orchestrator | changed: [testbed-node-0] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox'])
2025-05-25 03:50:32.806307 | orchestrator | changed: [testbed-node-2] => (item=[{'service_name': 'fluentd'}, 'fluentd'])
2025-05-25 03:50:32.806319 | orchestrator | changed: [testbed-node-3] => (item=[{'service_name': 'fluentd'}, 'fluentd'])
2025-05-25 03:50:32.806332 | orchestrator | changed: [testbed-manager] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox'])
2025-05-25 03:50:32.806344 | orchestrator | changed: [testbed-node-1] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox'])
2025-05-25 03:50:32.806357 | orchestrator | changed: [testbed-node-5] => (item=[{'service_name': 'cron'}, 'cron'])
2025-05-25 03:50:32.806369 | orchestrator | changed: [testbed-node-4] => (item=[{'service_name': 'fluentd'}, 'fluentd'])
2025-05-25 03:50:32.806394 | orchestrator | changed: [testbed-node-2] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox'])
2025-05-25 03:50:32.806408 | orchestrator | changed: [testbed-node-5] => (item=[{'service_name': 'fluentd'}, 'fluentd'])
2025-05-25 03:50:32.806419 | orchestrator | changed: [testbed-node-3] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox'])
2025-05-25 03:50:32.806430 | orchestrator | changed: [testbed-node-4] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox'])
2025-05-25 03:50:32.806441 | orchestrator | changed: [testbed-node-5] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox'])
2025-05-25 03:50:32.806452 | orchestrator |
2025-05-25 03:50:32.806463 | orchestrator | TASK [common : include_tasks] **************************************************
2025-05-25 03:50:32.806473 | orchestrator | Sunday 25 May 2025 03:48:01 +0000 (0:00:04.256) 0:00:05.696 ************
2025-05-25 03:50:32.806484 | orchestrator | included: /ansible/roles/common/tasks/copy-certs.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-05-25 03:50:32.806497 | orchestrator |
2025-05-25 03:50:32.806508 | orchestrator | TASK [service-cert-copy : common | Copying over extra CA certificates] *********
2025-05-25 03:50:32.806518 | orchestrator | Sunday 25 May 2025 03:48:03 +0000 (0:00:01.195) 0:00:06.891 ************
2025-05-25 03:50:32.806534 | orchestrator | changed: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2025-05-25 03:50:32.806550 | orchestrator | changed: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2025-05-25 03:50:32.806562 | orchestrator | changed: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2025-05-25 03:50:32.806573 | orchestrator | changed: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2025-05-25 03:50:32.806595 | orchestrator | changed: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2025-05-25 03:50:32.806614 | orchestrator | changed: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-05-25 03:50:32.806627 | orchestrator | changed: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-05-25 03:50:32.806638 | orchestrator | changed: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-05-25 03:50:32.806650 | orchestrator | changed: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2025-05-25 03:50:32.806661 | orchestrator | changed: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-05-25 03:50:32.806679 | orchestrator | changed: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2025-05-25 03:50:32.806692 | orchestrator | changed: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-05-25 03:50:32.806728 | orchestrator | changed: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-05-25 03:50:32.806753 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-05-25 03:50:32.806765 | orchestrator | changed: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-05-25 03:50:32.806777 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-05-25 03:50:32.806788 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-05-25 03:50:32.806799 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-05-25 03:50:32.806817 | orchestrator | changed: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-05-25 03:50:32.806828 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-05-25 03:50:32.806844 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-05-25 03:50:32.806856 | orchestrator |
2025-05-25 03:50:32.806867 | orchestrator | TASK [service-cert-copy : common | Copying over backend internal TLS certificate] ***
2025-05-25 03:50:32.806878 | orchestrator | Sunday 25 May 2025 03:48:08 +0000 (0:00:05.044) 0:00:11.936 ************
2025-05-25 03:50:32.806895 | orchestrator | skipping: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2025-05-25 03:50:32.806907 | orchestrator | skipping: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-05-25 03:50:32.806919 | orchestrator | skipping: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-05-25 03:50:32.806941 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2025-05-25 03:50:32.806953 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-05-25 03:50:32.806971 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-05-25 03:50:32.806983 | orchestrator | skipping: [testbed-manager]
2025-05-25 03:50:32.806994 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2025-05-25 03:50:32.807011 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-05-25 03:50:32.807032 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-05-25 03:50:32.807044 | orchestrator | skipping: [testbed-node-0]
2025-05-25 03:50:32.807055 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2025-05-25 03:50:32.807067 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-05-25 03:50:32.807078 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-05-25 03:50:32.807095 | orchestrator | skipping: [testbed-node-1]
2025-05-25 03:50:32.807107 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2025-05-25 03:50:32.807168 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes':
['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-25 03:50:32.807185 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-25 03:50:32.807197 | orchestrator | skipping: [testbed-node-2] 2025-05-25 03:50:32.807208 | orchestrator | skipping: [testbed-node-3] 2025-05-25 03:50:32.807219 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2025-05-25 03:50:32.807237 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 
'dimensions': {}}})  2025-05-25 03:50:32.807249 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-25 03:50:32.807260 | orchestrator | skipping: [testbed-node-4] 2025-05-25 03:50:32.807271 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2025-05-25 03:50:32.807290 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-25 03:50:32.807302 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': 
{'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-05-25 03:50:32.807313 | orchestrator | skipping: [testbed-node-5]
2025-05-25 03:50:32.807324 | orchestrator |
2025-05-25 03:50:32.807335 | orchestrator | TASK [service-cert-copy : common | Copying over backend internal TLS key] ******
2025-05-25 03:50:32.807346 | orchestrator | Sunday 25 May 2025 03:48:09 +0000 (0:00:01.142) 0:00:13.079 ************
2025-05-25 03:50:32.807357 | orchestrator | skipping: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2025-05-25 03:50:32.807373 | orchestrator | skipping: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-05-25 03:50:32.807389 | orchestrator | skipping: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2',
'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-25 03:50:32.807401 | orchestrator | skipping: [testbed-manager] 2025-05-25 03:50:32.807412 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2025-05-25 03:50:32.807424 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-25 03:50:32.807443 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-25 03:50:32.807454 | 
orchestrator | skipping: [testbed-node-0] 2025-05-25 03:50:32.807465 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2025-05-25 03:50:32.807477 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-25 03:50:32.807492 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-25 03:50:32.807504 | orchestrator | skipping: [testbed-node-1] 2025-05-25 03:50:32.807515 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 
'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2025-05-25 03:50:32.807533 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-25 03:50:32.807544 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-25 03:50:32.807562 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2025-05-25 
03:50:32.807574 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-25 03:50:32.807585 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-25 03:50:32.807596 | orchestrator | skipping: [testbed-node-2] 2025-05-25 03:50:32.807607 | orchestrator | skipping: [testbed-node-3] 2025-05-25 03:50:32.807618 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2025-05-25 03:50:32.807634 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 
'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-25 03:50:32.807658 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-25 03:50:32.807670 | orchestrator | skipping: [testbed-node-4] 2025-05-25 03:50:32.807681 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2025-05-25 03:50:32.807699 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': 
['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-05-25 03:50:32.807711 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-05-25 03:50:32.807722 | orchestrator | skipping: [testbed-node-5]
2025-05-25 03:50:32.807733 | orchestrator |
2025-05-25 03:50:32.807744 | orchestrator | TASK [common : Copying over /run subdirectories conf] **************************
2025-05-25 03:50:32.807755 | orchestrator | Sunday 25 May 2025 03:48:11 +0000 (0:00:02.617) 0:00:15.697 ************
2025-05-25 03:50:32.807766 | orchestrator | skipping: [testbed-manager]
2025-05-25 03:50:32.807776 | orchestrator | skipping: [testbed-node-0]
2025-05-25 03:50:32.807787 | orchestrator | skipping: [testbed-node-1]
2025-05-25 03:50:32.807798 | orchestrator | skipping: [testbed-node-2]
2025-05-25 03:50:32.807808 | orchestrator | skipping: [testbed-node-3]
2025-05-25 03:50:32.807819 | orchestrator | skipping: [testbed-node-4]
2025-05-25 03:50:32.807830 | orchestrator | skipping: [testbed-node-5]
2025-05-25 03:50:32.807840 | orchestrator |
2025-05-25 03:50:32.807851 | orchestrator | TASK [common : Restart systemd-tmpfiles] ***************************************
2025-05-25 03:50:32.807862 | orchestrator | Sunday 25 May 2025 03:48:12 +0000 (0:00:00.977) 0:00:16.674 ************
2025-05-25 03:50:32.807873 | orchestrator | skipping: [testbed-manager]
2025-05-25 03:50:32.807883 | orchestrator | skipping: [testbed-node-0]
2025-05-25 03:50:32.807894 | orchestrator | skipping: [testbed-node-1]
2025-05-25 03:50:32.807905 | orchestrator | skipping: [testbed-node-2]
2025-05-25 03:50:32.807915 | orchestrator | skipping: [testbed-node-3]
2025-05-25 03:50:32.807926 | orchestrator | skipping: [testbed-node-4]
2025-05-25 03:50:32.807936 | orchestrator | skipping: [testbed-node-5]
2025-05-25 03:50:32.807947 | orchestrator |
2025-05-25 03:50:32.807958 | orchestrator | TASK [common : Copying over config.json files for services] ********************
2025-05-25 03:50:32.807969 | orchestrator | Sunday 25 May 2025 03:48:13 +0000 (0:00:00.926) 0:00:17.601 ************
2025-05-25 03:50:32.807980 | orchestrator | changed: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2025-05-25 03:50:32.807991 | orchestrator | changed: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2025-05-25 03:50:32.808015 | orchestrator | changed: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'},
'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-05-25 03:50:32.808028 | orchestrator | changed: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-25 03:50:32.808039 | orchestrator | changed: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-05-25 03:50:32.808059 | orchestrator | changed: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-25 03:50:32.808070 | orchestrator | changed: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-25 03:50:32.808082 | orchestrator | changed: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-05-25 03:50:32.808097 | orchestrator | changed: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-05-25 03:50:32.808174 | orchestrator | changed: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 
'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-25 03:50:32.808195 | orchestrator | changed: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-25 03:50:32.808207 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-25 03:50:32.808218 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-25 03:50:32.808230 | orchestrator | changed: [testbed-node-5] => (item={'key': 'fluentd', 'value': 
{'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-05-25 03:50:32.808242 | orchestrator | changed: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-25 03:50:32.808253 | orchestrator | changed: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-25 03:50:32.808269 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 
'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-25 03:50:32.808293 | orchestrator | changed: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-25 03:50:32.808305 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-25 03:50:32.808316 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-25 03:50:32.808327 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': 
{'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-25 03:50:32.808338 | orchestrator | 2025-05-25 03:50:32.808349 | orchestrator | TASK [common : Find custom fluentd input config files] ************************* 2025-05-25 03:50:32.808360 | orchestrator | Sunday 25 May 2025 03:48:19 +0000 (0:00:06.118) 0:00:23.719 ************ 2025-05-25 03:50:32.808371 | orchestrator | [WARNING]: Skipped 2025-05-25 03:50:32.808383 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/input' path due 2025-05-25 03:50:32.808393 | orchestrator | to this access issue: 2025-05-25 03:50:32.808404 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/input' is not a 2025-05-25 03:50:32.808415 | orchestrator | directory 2025-05-25 03:50:32.808426 | orchestrator | ok: [testbed-manager -> localhost] 2025-05-25 03:50:32.808437 | orchestrator | 2025-05-25 03:50:32.808447 | orchestrator | TASK [common : Find custom fluentd filter config files] ************************ 2025-05-25 03:50:32.808458 | orchestrator | Sunday 25 May 2025 03:48:21 +0000 (0:00:01.860) 0:00:25.579 ************ 2025-05-25 03:50:32.808469 | orchestrator | [WARNING]: Skipped 2025-05-25 03:50:32.808480 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/filter' path due 2025-05-25 03:50:32.808490 | orchestrator | to this access issue: 2025-05-25 03:50:32.808501 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/filter' is not a 2025-05-25 03:50:32.808512 | orchestrator | directory 2025-05-25 03:50:32.808522 | orchestrator | ok: [testbed-manager -> localhost] 2025-05-25 03:50:32.808533 | orchestrator | 2025-05-25 03:50:32.808544 | orchestrator | TASK [common : Find custom fluentd format config files] ************************ 2025-05-25 
03:50:32.808554 | orchestrator | Sunday 25 May 2025 03:48:22 +0000 (0:00:00.920) 0:00:26.500 ************ 2025-05-25 03:50:32.808565 | orchestrator | [WARNING]: Skipped 2025-05-25 03:50:32.808576 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/format' path due 2025-05-25 03:50:32.808592 | orchestrator | to this access issue: 2025-05-25 03:50:32.808603 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/format' is not a 2025-05-25 03:50:32.808614 | orchestrator | directory 2025-05-25 03:50:32.808625 | orchestrator | ok: [testbed-manager -> localhost] 2025-05-25 03:50:32.808634 | orchestrator | 2025-05-25 03:50:32.808644 | orchestrator | TASK [common : Find custom fluentd output config files] ************************ 2025-05-25 03:50:32.808653 | orchestrator | Sunday 25 May 2025 03:48:23 +0000 (0:00:00.968) 0:00:27.468 ************ 2025-05-25 03:50:32.808663 | orchestrator | [WARNING]: Skipped 2025-05-25 03:50:32.808673 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/output' path due 2025-05-25 03:50:32.808682 | orchestrator | to this access issue: 2025-05-25 03:50:32.808692 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/output' is not a 2025-05-25 03:50:32.808701 | orchestrator | directory 2025-05-25 03:50:32.808724 | orchestrator | ok: [testbed-manager -> localhost] 2025-05-25 03:50:32.808744 | orchestrator | 2025-05-25 03:50:32.808754 | orchestrator | TASK [common : Copying over fluentd.conf] ************************************** 2025-05-25 03:50:32.808764 | orchestrator | Sunday 25 May 2025 03:48:24 +0000 (0:00:00.627) 0:00:28.096 ************ 2025-05-25 03:50:32.808773 | orchestrator | changed: [testbed-manager] 2025-05-25 03:50:32.808783 | orchestrator | changed: [testbed-node-0] 2025-05-25 03:50:32.808792 | orchestrator | changed: [testbed-node-2] 2025-05-25 03:50:32.808806 | orchestrator | changed: [testbed-node-1] 2025-05-25 
03:50:32.808816 | orchestrator | changed: [testbed-node-4] 2025-05-25 03:50:32.808825 | orchestrator | changed: [testbed-node-3] 2025-05-25 03:50:32.808835 | orchestrator | changed: [testbed-node-5] 2025-05-25 03:50:32.808844 | orchestrator | 2025-05-25 03:50:32.808854 | orchestrator | TASK [common : Copying over cron logrotate config file] ************************ 2025-05-25 03:50:32.808863 | orchestrator | Sunday 25 May 2025 03:48:28 +0000 (0:00:04.676) 0:00:32.772 ************ 2025-05-25 03:50:32.808873 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2025-05-25 03:50:32.808882 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2025-05-25 03:50:32.808892 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2025-05-25 03:50:32.808906 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2025-05-25 03:50:32.808916 | orchestrator | changed: [testbed-node-3] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2025-05-25 03:50:32.808926 | orchestrator | changed: [testbed-node-4] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2025-05-25 03:50:32.808936 | orchestrator | changed: [testbed-node-5] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2025-05-25 03:50:32.808945 | orchestrator | 2025-05-25 03:50:32.808955 | orchestrator | TASK [common : Ensure RabbitMQ Erlang cookie exists] *************************** 2025-05-25 03:50:32.808964 | orchestrator | Sunday 25 May 2025 03:48:31 +0000 (0:00:03.029) 0:00:35.801 ************ 2025-05-25 03:50:32.808974 | orchestrator | changed: [testbed-manager] 2025-05-25 03:50:32.808984 | orchestrator | changed: [testbed-node-1] 2025-05-25 03:50:32.808993 | orchestrator | changed: [testbed-node-0] 2025-05-25 
03:50:32.809003 | orchestrator | changed: [testbed-node-2] 2025-05-25 03:50:32.809012 | orchestrator | changed: [testbed-node-3] 2025-05-25 03:50:32.809021 | orchestrator | changed: [testbed-node-4] 2025-05-25 03:50:32.809031 | orchestrator | changed: [testbed-node-5] 2025-05-25 03:50:32.809040 | orchestrator | 2025-05-25 03:50:32.809050 | orchestrator | TASK [common : Ensuring config directories have correct owner and permission] *** 2025-05-25 03:50:32.809060 | orchestrator | Sunday 25 May 2025 03:48:34 +0000 (0:00:02.901) 0:00:38.703 ************ 2025-05-25 03:50:32.809069 | orchestrator | ok: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-05-25 03:50:32.809085 | orchestrator | skipping: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-25 03:50:32.809096 | orchestrator | ok: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': 
{'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-25 03:50:32.809125 | orchestrator | ok: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-05-25 03:50:32.809141 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-25 03:50:32.809158 | orchestrator | ok: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-05-25 03:50:32.809168 | 
orchestrator | skipping: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-25 03:50:32.809179 | orchestrator | ok: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-05-25 03:50:32.809194 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-25 03:50:32.809205 | orchestrator | ok: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 
'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-25 03:50:32.809215 | orchestrator | ok: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-25 03:50:32.809225 | orchestrator | ok: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-25 03:50:32.809239 | orchestrator | ok: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-05-25 03:50:32.809255 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': 
{'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-25 03:50:32.809266 | orchestrator | ok: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-05-25 03:50:32.809276 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-25 03:50:32.809292 | orchestrator | ok: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-05-25 03:50:32.809302 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-25 03:50:32.809312 | orchestrator | ok: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-25 03:50:32.809322 | orchestrator | ok: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-25 03:50:32.809336 | orchestrator | ok: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': 
['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-25 03:50:32.809346 | orchestrator | 2025-05-25 03:50:32.809356 | orchestrator | TASK [common : Copy rabbitmq-env.conf to kolla toolbox] ************************ 2025-05-25 03:50:32.809366 | orchestrator | Sunday 25 May 2025 03:48:37 +0000 (0:00:02.881) 0:00:41.585 ************ 2025-05-25 03:50:32.809375 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2025-05-25 03:50:32.809385 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2025-05-25 03:50:32.809395 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2025-05-25 03:50:32.809414 | orchestrator | 2025-05-25 03:50:32 | INFO  | Wait 1 second(s) until the next check 2025-05-25 03:50:32.809424 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2025-05-25 03:50:32.809434 | orchestrator | changed: [testbed-node-3] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2025-05-25 03:50:32.809449 | orchestrator | changed: [testbed-node-4] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2025-05-25 03:50:32.809458 | orchestrator | changed: [testbed-node-5] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2025-05-25 03:50:32.809468 | orchestrator | 2025-05-25 03:50:32.809478 | orchestrator | TASK [common : Copy rabbitmq erl_inetrc to kolla toolbox] ********************** 2025-05-25 03:50:32.809487 | orchestrator | Sunday 25 May 2025 03:48:39 +0000 (0:00:02.242) 0:00:43.827 ************ 2025-05-25 03:50:32.809497 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2025-05-25 03:50:32.809506 | orchestrator | changed: [testbed-node-0] => 
(item=/ansible/roles/common/templates/erl_inetrc.j2) 2025-05-25 03:50:32.809516 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2025-05-25 03:50:32.809525 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2025-05-25 03:50:32.809535 | orchestrator | changed: [testbed-node-3] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2025-05-25 03:50:32.809544 | orchestrator | changed: [testbed-node-4] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2025-05-25 03:50:32.809554 | orchestrator | changed: [testbed-node-5] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2025-05-25 03:50:32.809563 | orchestrator | 2025-05-25 03:50:32.809573 | orchestrator | TASK [common : Check common containers] **************************************** 2025-05-25 03:50:32.809582 | orchestrator | Sunday 25 May 2025 03:48:42 +0000 (0:00:02.236) 0:00:46.064 ************ 2025-05-25 03:50:32.809592 | orchestrator | changed: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-05-25 03:50:32.809603 | orchestrator | changed: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-05-25 03:50:32.809613 | orchestrator | changed: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-05-25 03:50:32.809626 | orchestrator | changed: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-05-25 03:50:32.809642 | orchestrator | changed: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-25 03:50:32.809662 | orchestrator | changed: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 
'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-25 03:50:32.809672 | orchestrator | changed: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-25 03:50:32.809682 | orchestrator | changed: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-05-25 03:50:32.809692 | orchestrator | changed: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': 
['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-05-25 03:50:32.809702 | orchestrator | changed: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-25 03:50:32.809713 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-25 03:50:32.809723 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-25 03:50:32.809744 | orchestrator | changed: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 
'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-05-25 03:50:32.809754 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-25 03:50:32.809765 | orchestrator | changed: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-25 03:50:32.809775 | orchestrator | changed: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-25 03:50:32.809785 | orchestrator | changed: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-25 03:50:32.809795 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-25 03:50:32.809809 | orchestrator | changed: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-25 03:50:32.809824 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': 
['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-25 03:50:32.809850 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-25 03:50:32.809861 | orchestrator | 2025-05-25 03:50:32.809871 | orchestrator | TASK [common : Creating log volume] ******************************************** 2025-05-25 03:50:32.809881 | orchestrator | Sunday 25 May 2025 03:48:46 +0000 (0:00:04.098) 0:00:50.162 ************ 2025-05-25 03:50:32.809890 | orchestrator | changed: [testbed-manager] 2025-05-25 03:50:32.809900 | orchestrator | changed: [testbed-node-0] 2025-05-25 03:50:32.809910 | orchestrator | changed: [testbed-node-1] 2025-05-25 03:50:32.809919 | orchestrator | changed: [testbed-node-2] 2025-05-25 03:50:32.809929 | orchestrator | changed: [testbed-node-3] 2025-05-25 03:50:32.809938 | orchestrator | changed: [testbed-node-4] 2025-05-25 03:50:32.809948 | orchestrator | changed: [testbed-node-5] 2025-05-25 03:50:32.809957 | orchestrator | 2025-05-25 03:50:32.809967 | orchestrator | TASK [common : Link kolla_logs volume to /var/log/kolla] *********************** 2025-05-25 03:50:32.809977 | orchestrator | Sunday 25 May 2025 03:48:48 +0000 (0:00:01.803) 0:00:51.965 ************ 2025-05-25 03:50:32.809986 | orchestrator | changed: [testbed-manager] 2025-05-25 03:50:32.809996 | orchestrator | changed: [testbed-node-0] 2025-05-25 03:50:32.810005 | orchestrator | changed: [testbed-node-1] 2025-05-25 03:50:32.810041 | orchestrator | changed: [testbed-node-2] 2025-05-25 03:50:32.810054 
| orchestrator | changed: [testbed-node-3] 2025-05-25 03:50:32.810064 | orchestrator | changed: [testbed-node-4] 2025-05-25 03:50:32.810073 | orchestrator | changed: [testbed-node-5] 2025-05-25 03:50:32.810083 | orchestrator | 2025-05-25 03:50:32.810092 | orchestrator | TASK [common : Flush handlers] ************************************************* 2025-05-25 03:50:32.810102 | orchestrator | Sunday 25 May 2025 03:48:49 +0000 (0:00:01.308) 0:00:53.273 ************ 2025-05-25 03:50:32.810128 | orchestrator | 2025-05-25 03:50:32.810139 | orchestrator | TASK [common : Flush handlers] ************************************************* 2025-05-25 03:50:32.810148 | orchestrator | Sunday 25 May 2025 03:48:49 +0000 (0:00:00.056) 0:00:53.330 ************ 2025-05-25 03:50:32.810158 | orchestrator | 2025-05-25 03:50:32.810167 | orchestrator | TASK [common : Flush handlers] ************************************************* 2025-05-25 03:50:32.810177 | orchestrator | Sunday 25 May 2025 03:48:49 +0000 (0:00:00.064) 0:00:53.395 ************ 2025-05-25 03:50:32.810186 | orchestrator | 2025-05-25 03:50:32.810196 | orchestrator | TASK [common : Flush handlers] ************************************************* 2025-05-25 03:50:32.810205 | orchestrator | Sunday 25 May 2025 03:48:49 +0000 (0:00:00.411) 0:00:53.806 ************ 2025-05-25 03:50:32.810215 | orchestrator | 2025-05-25 03:50:32.810224 | orchestrator | TASK [common : Flush handlers] ************************************************* 2025-05-25 03:50:32.810234 | orchestrator | Sunday 25 May 2025 03:48:49 +0000 (0:00:00.070) 0:00:53.877 ************ 2025-05-25 03:50:32.810243 | orchestrator | 2025-05-25 03:50:32.810253 | orchestrator | TASK [common : Flush handlers] ************************************************* 2025-05-25 03:50:32.810262 | orchestrator | Sunday 25 May 2025 03:48:50 +0000 (0:00:00.067) 0:00:53.944 ************ 2025-05-25 03:50:32.810272 | orchestrator | 2025-05-25 03:50:32.810287 | orchestrator | TASK [common 
: Flush handlers] ************************************************* 2025-05-25 03:50:32.810302 | orchestrator | Sunday 25 May 2025 03:48:50 +0000 (0:00:00.064) 0:00:54.009 ************ 2025-05-25 03:50:32.810319 | orchestrator | 2025-05-25 03:50:32.810339 | orchestrator | RUNNING HANDLER [common : Restart fluentd container] *************************** 2025-05-25 03:50:32.810371 | orchestrator | Sunday 25 May 2025 03:48:50 +0000 (0:00:00.106) 0:00:54.116 ************ 2025-05-25 03:50:32.810387 | orchestrator | changed: [testbed-node-0] 2025-05-25 03:50:32.810403 | orchestrator | changed: [testbed-manager] 2025-05-25 03:50:32.810419 | orchestrator | changed: [testbed-node-4] 2025-05-25 03:50:32.810434 | orchestrator | changed: [testbed-node-1] 2025-05-25 03:50:32.810450 | orchestrator | changed: [testbed-node-5] 2025-05-25 03:50:32.810466 | orchestrator | changed: [testbed-node-2] 2025-05-25 03:50:32.810482 | orchestrator | changed: [testbed-node-3] 2025-05-25 03:50:32.810497 | orchestrator | 2025-05-25 03:50:32.810512 | orchestrator | RUNNING HANDLER [common : Restart kolla-toolbox container] ********************* 2025-05-25 03:50:32.810522 | orchestrator | Sunday 25 May 2025 03:49:37 +0000 (0:00:46.928) 0:01:41.044 ************ 2025-05-25 03:50:32.810532 | orchestrator | changed: [testbed-node-0] 2025-05-25 03:50:32.810542 | orchestrator | changed: [testbed-node-2] 2025-05-25 03:50:32.810551 | orchestrator | changed: [testbed-node-5] 2025-05-25 03:50:32.810560 | orchestrator | changed: [testbed-node-1] 2025-05-25 03:50:32.810570 | orchestrator | changed: [testbed-manager] 2025-05-25 03:50:32.810579 | orchestrator | changed: [testbed-node-4] 2025-05-25 03:50:32.810588 | orchestrator | changed: [testbed-node-3] 2025-05-25 03:50:32.810598 | orchestrator | 2025-05-25 03:50:32.810607 | orchestrator | RUNNING HANDLER [common : Initializing toolbox container using normal user] **** 2025-05-25 03:50:32.810617 | orchestrator | Sunday 25 May 2025 03:50:20 +0000 (0:00:43.009) 
0:02:24.054 ************ 2025-05-25 03:50:32.810627 | orchestrator | ok: [testbed-node-1] 2025-05-25 03:50:32.810636 | orchestrator | ok: [testbed-node-2] 2025-05-25 03:50:32.810661 | orchestrator | ok: [testbed-node-0] 2025-05-25 03:50:32.810671 | orchestrator | ok: [testbed-manager] 2025-05-25 03:50:32.810680 | orchestrator | ok: [testbed-node-3] 2025-05-25 03:50:32.810690 | orchestrator | ok: [testbed-node-4] 2025-05-25 03:50:32.810699 | orchestrator | ok: [testbed-node-5] 2025-05-25 03:50:32.810709 | orchestrator | 2025-05-25 03:50:32.810718 | orchestrator | RUNNING HANDLER [common : Restart cron container] ****************************** 2025-05-25 03:50:32.810728 | orchestrator | Sunday 25 May 2025 03:50:22 +0000 (0:00:02.114) 0:02:26.169 ************ 2025-05-25 03:50:32.810738 | orchestrator | changed: [testbed-node-0] 2025-05-25 03:50:32.810752 | orchestrator | changed: [testbed-node-1] 2025-05-25 03:50:32.810762 | orchestrator | changed: [testbed-manager] 2025-05-25 03:50:32.810772 | orchestrator | changed: [testbed-node-2] 2025-05-25 03:50:32.810781 | orchestrator | changed: [testbed-node-3] 2025-05-25 03:50:32.810791 | orchestrator | changed: [testbed-node-4] 2025-05-25 03:50:32.810800 | orchestrator | changed: [testbed-node-5] 2025-05-25 03:50:32.810809 | orchestrator | 2025-05-25 03:50:32.810819 | orchestrator | PLAY RECAP ********************************************************************* 2025-05-25 03:50:32.810830 | orchestrator | testbed-manager : ok=22  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0 2025-05-25 03:50:32.810840 | orchestrator | testbed-node-0 : ok=18  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0 2025-05-25 03:50:32.810876 | orchestrator | testbed-node-1 : ok=18  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0 2025-05-25 03:50:32.810888 | orchestrator | testbed-node-2 : ok=18  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0 2025-05-25 03:50:32.810897 | 
orchestrator | testbed-node-3 : ok=18  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0 2025-05-25 03:50:32.810907 | orchestrator | testbed-node-4 : ok=18  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0 2025-05-25 03:50:32.810917 | orchestrator | testbed-node-5 : ok=18  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0 2025-05-25 03:50:32.810934 | orchestrator | 2025-05-25 03:50:32.810943 | orchestrator | 2025-05-25 03:50:32.810953 | orchestrator | TASKS RECAP ******************************************************************** 2025-05-25 03:50:32.810962 | orchestrator | Sunday 25 May 2025 03:50:31 +0000 (0:00:09.181) 0:02:35.351 ************ 2025-05-25 03:50:32.810972 | orchestrator | =============================================================================== 2025-05-25 03:50:32.810981 | orchestrator | common : Restart fluentd container ------------------------------------- 46.93s 2025-05-25 03:50:32.810991 | orchestrator | common : Restart kolla-toolbox container ------------------------------- 43.01s 2025-05-25 03:50:32.811000 | orchestrator | common : Restart cron container ----------------------------------------- 9.18s 2025-05-25 03:50:32.811010 | orchestrator | common : Copying over config.json files for services -------------------- 6.12s 2025-05-25 03:50:32.811019 | orchestrator | service-cert-copy : common | Copying over extra CA certificates --------- 5.04s 2025-05-25 03:50:32.811029 | orchestrator | common : Copying over fluentd.conf -------------------------------------- 4.68s 2025-05-25 03:50:32.811038 | orchestrator | common : Ensuring config directories exist ------------------------------ 4.26s 2025-05-25 03:50:32.811047 | orchestrator | common : Check common containers ---------------------------------------- 4.10s 2025-05-25 03:50:32.811057 | orchestrator | common : Copying over cron logrotate config file ------------------------ 3.03s 2025-05-25 03:50:32.811066 | orchestrator | common : 
Ensure RabbitMQ Erlang cookie exists --------------------------- 2.90s 2025-05-25 03:50:32.811076 | orchestrator | common : Ensuring config directories have correct owner and permission --- 2.88s 2025-05-25 03:50:32.811085 | orchestrator | service-cert-copy : common | Copying over backend internal TLS key ------ 2.62s 2025-05-25 03:50:32.811095 | orchestrator | common : Copy rabbitmq-env.conf to kolla toolbox ------------------------ 2.24s 2025-05-25 03:50:32.811104 | orchestrator | common : Copy rabbitmq erl_inetrc to kolla toolbox ---------------------- 2.24s 2025-05-25 03:50:32.811160 | orchestrator | common : Initializing toolbox container using normal user --------------- 2.11s 2025-05-25 03:50:32.811171 | orchestrator | common : Find custom fluentd input config files ------------------------- 1.86s 2025-05-25 03:50:32.811180 | orchestrator | common : Creating log volume -------------------------------------------- 1.80s 2025-05-25 03:50:32.811190 | orchestrator | common : Link kolla_logs volume to /var/log/kolla ----------------------- 1.31s 2025-05-25 03:50:32.811199 | orchestrator | common : include_tasks -------------------------------------------------- 1.20s 2025-05-25 03:50:32.811214 | orchestrator | common : include_tasks -------------------------------------------------- 1.18s 2025-05-25 03:50:35.867899 | orchestrator | 2025-05-25 03:50:35 | INFO  | Task f1699c70-0b05-422e-b6a2-8662964871e5 is in state STARTED 2025-05-25 03:50:35.868006 | orchestrator | 2025-05-25 03:50:35 | INFO  | Task c0214740-d9b7-4bee-98a0-0214c94bbfee is in state STARTED 2025-05-25 03:50:35.868289 | orchestrator | 2025-05-25 03:50:35 | INFO  | Task ac037253-5955-4a57-b3dc-0e762bea6c80 is in state STARTED 2025-05-25 03:50:35.869169 | orchestrator | 2025-05-25 03:50:35 | INFO  | Task a839d0dc-b1b8-464e-b4ab-1e7908546cf0 is in state STARTED 2025-05-25 03:50:35.876870 | orchestrator | 2025-05-25 03:50:35 | INFO  | Task 5f6fb199-9bd9-4dc8-9384-b0da3d2fe9b3 is in state STARTED 
2025-05-25 03:50:35.877603 | orchestrator | 2025-05-25 03:50:35 | INFO  | Task 1b3b6d6f-94c9-4e28-983e-91ecfe416b9a is in state STARTED 2025-05-25 03:50:35.878659 | orchestrator | 2025-05-25 03:50:35 | INFO  | Wait 1 second(s) until the next check 2025-05-25 03:50:38.931492 | orchestrator | 2025-05-25 03:50:38 | INFO  | Task f1699c70-0b05-422e-b6a2-8662964871e5 is in state STARTED 2025-05-25 03:50:38.931627 | orchestrator | 2025-05-25 03:50:38 | INFO  | Task c0214740-d9b7-4bee-98a0-0214c94bbfee is in state STARTED 2025-05-25 03:50:38.931933 | orchestrator | 2025-05-25 03:50:38 | INFO  | Task ac037253-5955-4a57-b3dc-0e762bea6c80 is in state STARTED 2025-05-25 03:50:38.932491 | orchestrator | 2025-05-25 03:50:38 | INFO  | Task a839d0dc-b1b8-464e-b4ab-1e7908546cf0 is in state STARTED 2025-05-25 03:50:38.933169 | orchestrator | 2025-05-25 03:50:38 | INFO  | Task 5f6fb199-9bd9-4dc8-9384-b0da3d2fe9b3 is in state STARTED 2025-05-25 03:50:38.933876 | orchestrator | 2025-05-25 03:50:38 | INFO  | Task 1b3b6d6f-94c9-4e28-983e-91ecfe416b9a is in state STARTED 2025-05-25 03:50:38.933902 | orchestrator | 2025-05-25 03:50:38 | INFO  | Wait 1 second(s) until the next check 2025-05-25 03:50:41.964951 | orchestrator | 2025-05-25 03:50:41 | INFO  | Task f1699c70-0b05-422e-b6a2-8662964871e5 is in state STARTED 2025-05-25 03:50:41.965053 | orchestrator | 2025-05-25 03:50:41 | INFO  | Task c0214740-d9b7-4bee-98a0-0214c94bbfee is in state STARTED 2025-05-25 03:50:41.965805 | orchestrator | 2025-05-25 03:50:41 | INFO  | Task ac037253-5955-4a57-b3dc-0e762bea6c80 is in state STARTED 2025-05-25 03:50:41.967889 | orchestrator | 2025-05-25 03:50:41 | INFO  | Task a839d0dc-b1b8-464e-b4ab-1e7908546cf0 is in state STARTED 2025-05-25 03:50:41.968696 | orchestrator | 2025-05-25 03:50:41 | INFO  | Task 5f6fb199-9bd9-4dc8-9384-b0da3d2fe9b3 is in state STARTED 2025-05-25 03:50:41.970216 | orchestrator | 2025-05-25 03:50:41 | INFO  | Task 1b3b6d6f-94c9-4e28-983e-91ecfe416b9a is in state STARTED 
2025-05-25 03:50:41.970257 | orchestrator | 2025-05-25 03:50:41 | INFO  | Wait 1 second(s) until the next check 2025-05-25 03:50:45.004743 | orchestrator | 2025-05-25 03:50:44 | INFO  | Task f1699c70-0b05-422e-b6a2-8662964871e5 is in state STARTED 2025-05-25 03:50:45.004892 | orchestrator | 2025-05-25 03:50:45 | INFO  | Task c0214740-d9b7-4bee-98a0-0214c94bbfee is in state STARTED 2025-05-25 03:50:45.004910 | orchestrator | 2025-05-25 03:50:45 | INFO  | Task ac037253-5955-4a57-b3dc-0e762bea6c80 is in state STARTED 2025-05-25 03:50:45.004992 | orchestrator | 2025-05-25 03:50:45 | INFO  | Task a839d0dc-b1b8-464e-b4ab-1e7908546cf0 is in state STARTED 2025-05-25 03:50:45.013370 | orchestrator | 2025-05-25 03:50:45 | INFO  | Task 5f6fb199-9bd9-4dc8-9384-b0da3d2fe9b3 is in state STARTED 2025-05-25 03:50:45.013447 | orchestrator | 2025-05-25 03:50:45 | INFO  | Task 1b3b6d6f-94c9-4e28-983e-91ecfe416b9a is in state STARTED 2025-05-25 03:50:45.013467 | orchestrator | 2025-05-25 03:50:45 | INFO  | Wait 1 second(s) until the next check 2025-05-25 03:50:48.051839 | orchestrator | 2025-05-25 03:50:48 | INFO  | Task f1699c70-0b05-422e-b6a2-8662964871e5 is in state STARTED 2025-05-25 03:50:48.052068 | orchestrator | 2025-05-25 03:50:48 | INFO  | Task c0214740-d9b7-4bee-98a0-0214c94bbfee is in state STARTED 2025-05-25 03:50:48.052905 | orchestrator | 2025-05-25 03:50:48 | INFO  | Task ac037253-5955-4a57-b3dc-0e762bea6c80 is in state STARTED 2025-05-25 03:50:48.053448 | orchestrator | 2025-05-25 03:50:48 | INFO  | Task a839d0dc-b1b8-464e-b4ab-1e7908546cf0 is in state STARTED 2025-05-25 03:50:48.054620 | orchestrator | 2025-05-25 03:50:48 | INFO  | Task 5f6fb199-9bd9-4dc8-9384-b0da3d2fe9b3 is in state STARTED 2025-05-25 03:50:48.055380 | orchestrator | 2025-05-25 03:50:48 | INFO  | Task 1b3b6d6f-94c9-4e28-983e-91ecfe416b9a is in state STARTED 2025-05-25 03:50:48.055405 | orchestrator | 2025-05-25 03:50:48 | INFO  | Wait 1 second(s) until the next check 2025-05-25 03:50:51.086577 | 
orchestrator | 2025-05-25 03:50:51 | INFO  | Task f1699c70-0b05-422e-b6a2-8662964871e5 is in state STARTED 2025-05-25 03:50:51.086811 | orchestrator | 2025-05-25 03:50:51 | INFO  | Task c0214740-d9b7-4bee-98a0-0214c94bbfee is in state STARTED 2025-05-25 03:50:51.089263 | orchestrator | 2025-05-25 03:50:51 | INFO  | Task ac037253-5955-4a57-b3dc-0e762bea6c80 is in state STARTED 2025-05-25 03:50:51.090933 | orchestrator | 2025-05-25 03:50:51 | INFO  | Task a839d0dc-b1b8-464e-b4ab-1e7908546cf0 is in state STARTED 2025-05-25 03:50:51.091408 | orchestrator | 2025-05-25 03:50:51 | INFO  | Task a5c3152a-f935-4d0c-ba37-5585c5fa0428 is in state STARTED 2025-05-25 03:50:51.092403 | orchestrator | 2025-05-25 03:50:51 | INFO  | Task 5f6fb199-9bd9-4dc8-9384-b0da3d2fe9b3 is in state STARTED 2025-05-25 03:50:51.093509 | orchestrator | 2025-05-25 03:50:51 | INFO  | Task 1b3b6d6f-94c9-4e28-983e-91ecfe416b9a is in state SUCCESS 2025-05-25 03:50:51.093564 | orchestrator | 2025-05-25 03:50:51 | INFO  | Wait 1 second(s) until the next check 2025-05-25 03:50:54.126887 | orchestrator | 2025-05-25 03:50:54 | INFO  | Task f1699c70-0b05-422e-b6a2-8662964871e5 is in state STARTED 2025-05-25 03:50:54.128022 | orchestrator | 2025-05-25 03:50:54 | INFO  | Task c0214740-d9b7-4bee-98a0-0214c94bbfee is in state STARTED 2025-05-25 03:50:54.128867 | orchestrator | 2025-05-25 03:50:54 | INFO  | Task ac037253-5955-4a57-b3dc-0e762bea6c80 is in state STARTED 2025-05-25 03:50:54.131961 | orchestrator | 2025-05-25 03:50:54 | INFO  | Task a839d0dc-b1b8-464e-b4ab-1e7908546cf0 is in state STARTED 2025-05-25 03:50:54.132345 | orchestrator | 2025-05-25 03:50:54 | INFO  | Task a5c3152a-f935-4d0c-ba37-5585c5fa0428 is in state STARTED 2025-05-25 03:50:54.133183 | orchestrator | 2025-05-25 03:50:54 | INFO  | Task 5f6fb199-9bd9-4dc8-9384-b0da3d2fe9b3 is in state STARTED 2025-05-25 03:50:54.133240 | orchestrator | 2025-05-25 03:50:54 | INFO  | Wait 1 second(s) until the next check 2025-05-25 03:50:57.173415 | 
orchestrator | 2025-05-25 03:50:57 | INFO  | Task f1699c70-0b05-422e-b6a2-8662964871e5 is in state STARTED 2025-05-25 03:50:57.174438 | orchestrator | 2025-05-25 03:50:57 | INFO  | Task c0214740-d9b7-4bee-98a0-0214c94bbfee is in state STARTED 2025-05-25 03:50:57.178622 | orchestrator | 2025-05-25 03:50:57 | INFO  | Task ac037253-5955-4a57-b3dc-0e762bea6c80 is in state STARTED 2025-05-25 03:50:57.178949 | orchestrator | 2025-05-25 03:50:57 | INFO  | Task a839d0dc-b1b8-464e-b4ab-1e7908546cf0 is in state STARTED 2025-05-25 03:50:57.179754 | orchestrator | 2025-05-25 03:50:57 | INFO  | Task a5c3152a-f935-4d0c-ba37-5585c5fa0428 is in state STARTED 2025-05-25 03:50:57.180453 | orchestrator | 2025-05-25 03:50:57 | INFO  | Task 5f6fb199-9bd9-4dc8-9384-b0da3d2fe9b3 is in state STARTED 2025-05-25 03:50:57.182001 | orchestrator | 2025-05-25 03:50:57 | INFO  | Wait 1 second(s) until the next check 2025-05-25 03:51:00.224175 | orchestrator | 2025-05-25 03:51:00 | INFO  | Task f1699c70-0b05-422e-b6a2-8662964871e5 is in state STARTED 2025-05-25 03:51:00.224486 | orchestrator | 2025-05-25 03:51:00 | INFO  | Task c0214740-d9b7-4bee-98a0-0214c94bbfee is in state STARTED 2025-05-25 03:51:00.225473 | orchestrator | 2025-05-25 03:51:00 | INFO  | Task ac037253-5955-4a57-b3dc-0e762bea6c80 is in state STARTED 2025-05-25 03:51:00.226211 | orchestrator | 2025-05-25 03:51:00 | INFO  | Task a839d0dc-b1b8-464e-b4ab-1e7908546cf0 is in state STARTED 2025-05-25 03:51:00.227442 | orchestrator | 2025-05-25 03:51:00 | INFO  | Task a5c3152a-f935-4d0c-ba37-5585c5fa0428 is in state STARTED 2025-05-25 03:51:00.228384 | orchestrator | 2025-05-25 03:51:00 | INFO  | Task 5f6fb199-9bd9-4dc8-9384-b0da3d2fe9b3 is in state STARTED 2025-05-25 03:51:00.228416 | orchestrator | 2025-05-25 03:51:00 | INFO  | Wait 1 second(s) until the next check 2025-05-25 03:51:03.270251 | orchestrator | 2025-05-25 03:51:03 | INFO  | Task f1699c70-0b05-422e-b6a2-8662964871e5 is in state STARTED 2025-05-25 03:51:03.271399 | 
orchestrator | 2025-05-25 03:51:03 | INFO  | Task c0214740-d9b7-4bee-98a0-0214c94bbfee is in state STARTED 2025-05-25 03:51:03.272889 | orchestrator | 2025-05-25 03:51:03 | INFO  | Task ac037253-5955-4a57-b3dc-0e762bea6c80 is in state STARTED 2025-05-25 03:51:03.275382 | orchestrator | 2025-05-25 03:51:03 | INFO  | Task a839d0dc-b1b8-464e-b4ab-1e7908546cf0 is in state STARTED 2025-05-25 03:51:03.276662 | orchestrator | 2025-05-25 03:51:03 | INFO  | Task a5c3152a-f935-4d0c-ba37-5585c5fa0428 is in state STARTED 2025-05-25 03:51:03.277183 | orchestrator | 2025-05-25 03:51:03 | INFO  | Task 5f6fb199-9bd9-4dc8-9384-b0da3d2fe9b3 is in state SUCCESS 2025-05-25 03:51:03.277402 | orchestrator | 2025-05-25 03:51:03 | INFO  | Wait 1 second(s) until the next check 2025-05-25 03:51:03.278847 | orchestrator | 2025-05-25 03:51:03.278888 | orchestrator | 2025-05-25 03:51:03.278901 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-05-25 03:51:03.278913 | orchestrator | 2025-05-25 03:51:03.278924 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-05-25 03:51:03.278935 | orchestrator | Sunday 25 May 2025 03:50:39 +0000 (0:00:00.224) 0:00:00.224 ************ 2025-05-25 03:51:03.278946 | orchestrator | ok: [testbed-node-0] 2025-05-25 03:51:03.278959 | orchestrator | ok: [testbed-node-1] 2025-05-25 03:51:03.278970 | orchestrator | ok: [testbed-node-2] 2025-05-25 03:51:03.278980 | orchestrator | 2025-05-25 03:51:03.278992 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-05-25 03:51:03.279003 | orchestrator | Sunday 25 May 2025 03:50:39 +0000 (0:00:00.346) 0:00:00.571 ************ 2025-05-25 03:51:03.279014 | orchestrator | ok: [testbed-node-0] => (item=enable_memcached_True) 2025-05-25 03:51:03.279026 | orchestrator | ok: [testbed-node-1] => (item=enable_memcached_True) 2025-05-25 03:51:03.279037 | orchestrator | ok: 
[testbed-node-2] => (item=enable_memcached_True) 2025-05-25 03:51:03.279047 | orchestrator | 2025-05-25 03:51:03.279066 | orchestrator | PLAY [Apply role memcached] **************************************************** 2025-05-25 03:51:03.279078 | orchestrator | 2025-05-25 03:51:03.279089 | orchestrator | TASK [memcached : include_tasks] *********************************************** 2025-05-25 03:51:03.279100 | orchestrator | Sunday 25 May 2025 03:50:40 +0000 (0:00:00.612) 0:00:01.184 ************ 2025-05-25 03:51:03.279137 | orchestrator | included: /ansible/roles/memcached/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-25 03:51:03.279149 | orchestrator | 2025-05-25 03:51:03.279161 | orchestrator | TASK [memcached : Ensuring config directories exist] *************************** 2025-05-25 03:51:03.279172 | orchestrator | Sunday 25 May 2025 03:50:40 +0000 (0:00:00.689) 0:00:01.873 ************ 2025-05-25 03:51:03.279183 | orchestrator | changed: [testbed-node-0] => (item=memcached) 2025-05-25 03:51:03.279194 | orchestrator | changed: [testbed-node-1] => (item=memcached) 2025-05-25 03:51:03.279205 | orchestrator | changed: [testbed-node-2] => (item=memcached) 2025-05-25 03:51:03.279216 | orchestrator | 2025-05-25 03:51:03.279226 | orchestrator | TASK [memcached : Copying over config.json files for services] ***************** 2025-05-25 03:51:03.279237 | orchestrator | Sunday 25 May 2025 03:50:41 +0000 (0:00:00.925) 0:00:02.798 ************ 2025-05-25 03:51:03.279248 | orchestrator | changed: [testbed-node-1] => (item=memcached) 2025-05-25 03:51:03.279259 | orchestrator | changed: [testbed-node-0] => (item=memcached) 2025-05-25 03:51:03.279270 | orchestrator | changed: [testbed-node-2] => (item=memcached) 2025-05-25 03:51:03.279281 | orchestrator | 2025-05-25 03:51:03.279292 | orchestrator | TASK [memcached : Check memcached container] *********************************** 2025-05-25 03:51:03.279303 | orchestrator | Sunday 25 May 2025 
03:50:43 +0000 (0:00:02.072) 0:00:04.871 ************ 2025-05-25 03:51:03.279313 | orchestrator | changed: [testbed-node-0] 2025-05-25 03:51:03.279324 | orchestrator | changed: [testbed-node-1] 2025-05-25 03:51:03.279335 | orchestrator | changed: [testbed-node-2] 2025-05-25 03:51:03.279346 | orchestrator | 2025-05-25 03:51:03.279357 | orchestrator | RUNNING HANDLER [memcached : Restart memcached container] ********************** 2025-05-25 03:51:03.279368 | orchestrator | Sunday 25 May 2025 03:50:46 +0000 (0:00:02.299) 0:00:07.171 ************ 2025-05-25 03:51:03.279396 | orchestrator | changed: [testbed-node-0] 2025-05-25 03:51:03.279407 | orchestrator | changed: [testbed-node-1] 2025-05-25 03:51:03.279417 | orchestrator | changed: [testbed-node-2] 2025-05-25 03:51:03.279428 | orchestrator | 2025-05-25 03:51:03.279441 | orchestrator | PLAY RECAP ********************************************************************* 2025-05-25 03:51:03.279455 | orchestrator | testbed-node-0 : ok=7  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-05-25 03:51:03.279469 | orchestrator | testbed-node-1 : ok=7  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-05-25 03:51:03.279483 | orchestrator | testbed-node-2 : ok=7  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-05-25 03:51:03.279496 | orchestrator | 2025-05-25 03:51:03.279508 | orchestrator | 2025-05-25 03:51:03.279521 | orchestrator | TASKS RECAP ******************************************************************** 2025-05-25 03:51:03.279533 | orchestrator | Sunday 25 May 2025 03:50:48 +0000 (0:00:02.652) 0:00:09.824 ************ 2025-05-25 03:51:03.279545 | orchestrator | =============================================================================== 2025-05-25 03:51:03.279558 | orchestrator | memcached : Restart memcached container --------------------------------- 2.65s 2025-05-25 03:51:03.279570 | orchestrator | memcached : Check memcached container 
----------------------------------- 2.30s 2025-05-25 03:51:03.279582 | orchestrator | memcached : Copying over config.json files for services ----------------- 2.07s 2025-05-25 03:51:03.279595 | orchestrator | memcached : Ensuring config directories exist --------------------------- 0.93s 2025-05-25 03:51:03.279607 | orchestrator | memcached : include_tasks ----------------------------------------------- 0.69s 2025-05-25 03:51:03.279620 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.61s 2025-05-25 03:51:03.279632 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.35s 2025-05-25 03:51:03.279645 | orchestrator | 2025-05-25 03:51:03.279657 | orchestrator | 2025-05-25 03:51:03.279669 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-05-25 03:51:03.279682 | orchestrator | 2025-05-25 03:51:03.279694 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-05-25 03:51:03.279706 | orchestrator | Sunday 25 May 2025 03:50:38 +0000 (0:00:00.214) 0:00:00.214 ************ 2025-05-25 03:51:03.279719 | orchestrator | ok: [testbed-node-0] 2025-05-25 03:51:03.279731 | orchestrator | ok: [testbed-node-1] 2025-05-25 03:51:03.279744 | orchestrator | ok: [testbed-node-2] 2025-05-25 03:51:03.279756 | orchestrator | 2025-05-25 03:51:03.279769 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-05-25 03:51:03.279795 | orchestrator | Sunday 25 May 2025 03:50:38 +0000 (0:00:00.270) 0:00:00.484 ************ 2025-05-25 03:51:03.279806 | orchestrator | ok: [testbed-node-0] => (item=enable_redis_True) 2025-05-25 03:51:03.279817 | orchestrator | ok: [testbed-node-1] => (item=enable_redis_True) 2025-05-25 03:51:03.279828 | orchestrator | ok: [testbed-node-2] => (item=enable_redis_True) 2025-05-25 03:51:03.279839 | orchestrator | 2025-05-25 03:51:03.279849 | 
orchestrator | PLAY [Apply role redis] ******************************************************** 2025-05-25 03:51:03.279860 | orchestrator | 2025-05-25 03:51:03.279871 | orchestrator | TASK [redis : include_tasks] *************************************************** 2025-05-25 03:51:03.279882 | orchestrator | Sunday 25 May 2025 03:50:39 +0000 (0:00:00.435) 0:00:00.920 ************ 2025-05-25 03:51:03.279892 | orchestrator | included: /ansible/roles/redis/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-25 03:51:03.279903 | orchestrator | 2025-05-25 03:51:03.279914 | orchestrator | TASK [redis : Ensuring config directories exist] ******************************* 2025-05-25 03:51:03.279925 | orchestrator | Sunday 25 May 2025 03:50:40 +0000 (0:00:00.715) 0:00:01.635 ************ 2025-05-25 03:51:03.279943 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.2', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2025-05-25 03:51:03.279968 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.2', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2025-05-25 
03:51:03.279980 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.2', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2025-05-25 03:51:03.279992 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.2', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2025-05-25 03:51:03.280004 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.2', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 
26379'], 'timeout': '30'}}})
2025-05-25 03:51:03.280023 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.2', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}})
2025-05-25 03:51:03.280035 | orchestrator |
2025-05-25 03:51:03.280046 | orchestrator | TASK [redis : Copying over default config.json files] **************************
2025-05-25 03:51:03.280057 | orchestrator | Sunday 25 May 2025 03:50:41 +0000 (0:00:01.422) 0:00:03.058 ************
2025-05-25 03:51:03.280073 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.2', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}})
2025-05-25 03:51:03.280090 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.2', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}})
2025-05-25 03:51:03.280102 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.2', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}})
2025-05-25 03:51:03.280187 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.2', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}})
2025-05-25 03:51:03.280199 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.2', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}})
2025-05-25 03:51:03.280218 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.2', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}})
2025-05-25 03:51:03.280229 | orchestrator |
2025-05-25 03:51:03.280241 | orchestrator | TASK [redis : Copying over redis config files] *********************************
2025-05-25 03:51:03.280252 | orchestrator | Sunday 25 May 2025 03:50:44 +0000 (0:00:03.292) 0:00:06.350 ************
2025-05-25 03:51:03.280269 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.2', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}})
2025-05-25 03:51:03.280288 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.2', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}})
2025-05-25 03:51:03.280300 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.2', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}})
2025-05-25 03:51:03.280311 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.2', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}})
2025-05-25 03:51:03.280323 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.2', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}})
2025-05-25 03:51:03.280334 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.2', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}})
2025-05-25 03:51:03.280346 | orchestrator |
2025-05-25 03:51:03.280362 | orchestrator | TASK [redis : Check redis containers] ******************************************
2025-05-25 03:51:03.280373 | orchestrator | Sunday 25 May 2025 03:50:47 +0000 (0:00:01.909) 0:00:09.427 ************
2025-05-25 03:51:03.280392 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.2', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}})
2025-05-25 03:51:03.280407 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.2', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}})
2025-05-25 03:51:03.280419 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.2', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}})
2025-05-25 03:51:03.280430 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.2', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}})
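Every loop item in the redis and redis-sentinel tasks above carries the same `healthcheck` mapping (`interval`, `retries`, `start_period`, `test`, `timeout`). As a minimal illustrative sketch only — this is not OSISM or kolla-ansible code, `healthcheck_flags` is a hypothetical helper, and the mapping to `docker run`-style flags is an assumption — the structure can be read like this:

```python
# Sketch: translate a kolla-style container item's healthcheck dict into
# docker-run-style health flags. Hypothetical helper, not project code.
def healthcheck_flags(item):
    hc = item["value"]["healthcheck"]
    # 'test' is a list such as ['CMD-SHELL', 'healthcheck_listen redis-server 6379']
    cmd = " ".join(hc["test"][1:]) if hc["test"][0] == "CMD-SHELL" else " ".join(hc["test"])
    return [
        f"--health-cmd={cmd}",
        f"--health-interval={hc['interval']}s",
        f"--health-retries={hc['retries']}",
        f"--health-start-period={hc['start_period']}s",
        f"--health-timeout={hc['timeout']}s",
    ]

# Example item, copied from the loop output above (trimmed to the healthcheck key):
item = {"key": "redis", "value": {"healthcheck": {
    "interval": "30", "retries": "3", "start_period": "5",
    "test": ["CMD-SHELL", "healthcheck_listen redis-server 6379"], "timeout": "30"}}}

print(healthcheck_flags(item)[0])  # --health-cmd=healthcheck_listen redis-server 6379
```

The redis-sentinel items differ only in the command (`healthcheck_listen redis-sentinel 26379`); the timing fields are identical across all items in this run.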
2025-05-25 03:51:03.280442 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.2', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}})
2025-05-25 03:51:03.280453 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.2', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}})
2025-05-25 03:51:03.280465 | orchestrator |
2025-05-25 03:51:03.280476 | orchestrator | TASK [redis : Flush handlers] **************************************************
2025-05-25 03:51:03.280487 | orchestrator | Sunday 25 May 2025 03:50:49 +0000 (0:00:01.909) 0:00:11.336 ************
2025-05-25 03:51:03.280504 | orchestrator |
2025-05-25 03:51:03.280515 | orchestrator | TASK [redis : Flush handlers] **************************************************
2025-05-25 03:51:03.280532 | orchestrator | Sunday 25 May 2025 03:50:49 +0000 (0:00:00.066) 0:00:11.402 ************
2025-05-25 03:51:03.280544 | orchestrator |
2025-05-25 03:51:03.280555 | orchestrator | TASK [redis : Flush handlers] **************************************************
2025-05-25 03:51:03.280566 | orchestrator | Sunday 25 May 2025 03:50:49 +0000 (0:00:00.056) 0:00:11.459 ************
2025-05-25 03:51:03.280577 | orchestrator |
2025-05-25 03:51:03.280588 | orchestrator | RUNNING HANDLER [redis : Restart redis container] ******************************
2025-05-25 03:51:03.280598 | orchestrator | Sunday 25 May 2025 03:50:49 +0000 (0:00:00.138) 0:00:11.597 ************
2025-05-25 03:51:03.280609 | orchestrator | changed: [testbed-node-0]
2025-05-25 03:51:03.280619 | orchestrator | changed: [testbed-node-1]
2025-05-25 03:51:03.280629 | orchestrator | changed: [testbed-node-2]
2025-05-25 03:51:03.280639 | orchestrator |
2025-05-25 03:51:03.280648 | orchestrator | RUNNING HANDLER [redis : Restart redis-sentinel container] *********************
2025-05-25 03:51:03.280658 | orchestrator | Sunday 25 May 2025 03:50:53 +0000 (0:00:03.887) 0:00:15.484 ************
2025-05-25 03:51:03.280667 | orchestrator | changed: [testbed-node-0]
2025-05-25 03:51:03.280677 | orchestrator | changed: [testbed-node-2]
2025-05-25 03:51:03.280687 | orchestrator | changed: [testbed-node-1]
2025-05-25 03:51:03.280696 | orchestrator |
2025-05-25 03:51:03.280706 | orchestrator | PLAY RECAP *********************************************************************
2025-05-25 03:51:03.280716 | orchestrator | testbed-node-0 : ok=9  changed=6  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-05-25 03:51:03.280726 | orchestrator | testbed-node-1 : ok=9  changed=6  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-05-25 03:51:03.280736 | orchestrator | testbed-node-2 : ok=9  changed=6  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-05-25 03:51:03.280745 | orchestrator |
2025-05-25 03:51:03.280755 | orchestrator |
2025-05-25 03:51:03.280764 | orchestrator | TASKS RECAP ********************************************************************
2025-05-25 03:51:03.280774 | orchestrator | Sunday 25 May 2025 03:51:02 +0000 (0:00:08.522) 0:00:24.007 ************
2025-05-25 03:51:03.280784 | orchestrator | ===============================================================================
2025-05-25 03:51:03.280793 | orchestrator | redis : Restart redis-sentinel container -------------------------------- 8.52s
2025-05-25 03:51:03.280804 | orchestrator | redis : Restart redis container ----------------------------------------- 3.89s
2025-05-25 03:51:03.280813 | orchestrator | redis : Copying over default config.json files -------------------------- 3.29s
2025-05-25 03:51:03.280823 | orchestrator | redis : Copying over redis config files --------------------------------- 3.08s
2025-05-25 03:51:03.280832 | orchestrator | redis : Check redis containers ------------------------------------------ 1.91s
2025-05-25 03:51:03.280842 | orchestrator | redis : Ensuring config directories exist ------------------------------- 1.42s
2025-05-25 03:51:03.280857 | orchestrator | redis : include_tasks --------------------------------------------------- 0.72s
2025-05-25 03:51:03.280867 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.44s
2025-05-25 03:51:03.280877 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.27s
2025-05-25 03:51:03.280887 | orchestrator | redis : Flush handlers -------------------------------------------------- 0.26s
2025-05-25 03:51:06.327801 | orchestrator | 2025-05-25 03:51:06 | INFO  | Task f1699c70-0b05-422e-b6a2-8662964871e5 is in state STARTED
2025-05-25 03:51:06.330511 | orchestrator | 2025-05-25 03:51:06 | INFO  | Task c0214740-d9b7-4bee-98a0-0214c94bbfee is in state STARTED
2025-05-25 03:51:06.332565 | orchestrator | 2025-05-25 03:51:06 | INFO  | Task ac037253-5955-4a57-b3dc-0e762bea6c80 is in state STARTED
2025-05-25 03:51:06.334453 |
orchestrator | 2025-05-25 03:51:06 | INFO  | Task a839d0dc-b1b8-464e-b4ab-1e7908546cf0 is in state STARTED
2025-05-25 03:51:06.336358 | orchestrator | 2025-05-25 03:51:06 | INFO  | Task a5c3152a-f935-4d0c-ba37-5585c5fa0428 is in state STARTED
2025-05-25 03:51:06.336603 | orchestrator | 2025-05-25 03:51:06 | INFO  | Wait 1 second(s) until the next check
2025-05-25 03:51:09.379177 | orchestrator | 2025-05-25 03:51:09 | INFO  | Task f1699c70-0b05-422e-b6a2-8662964871e5 is in state STARTED
2025-05-25 03:51:09.379357 | orchestrator | 2025-05-25 03:51:09 | INFO  | Task c0214740-d9b7-4bee-98a0-0214c94bbfee is in state STARTED
2025-05-25 03:51:09.380206 | orchestrator | 2025-05-25 03:51:09 | INFO  | Task ac037253-5955-4a57-b3dc-0e762bea6c80 is in state STARTED
2025-05-25 03:51:09.380970 | orchestrator | 2025-05-25 03:51:09 | INFO  | Task a839d0dc-b1b8-464e-b4ab-1e7908546cf0 is in state STARTED
2025-05-25 03:51:09.382469 | orchestrator | 2025-05-25 03:51:09 | INFO  | Task a5c3152a-f935-4d0c-ba37-5585c5fa0428 is in state STARTED
2025-05-25 03:51:09.382632 | orchestrator | 2025-05-25 03:51:09 | INFO  | Wait 1 second(s) until the next check
2025-05-25 03:51:12.437680 | orchestrator | 2025-05-25 03:51:12 | INFO  | Task f1699c70-0b05-422e-b6a2-8662964871e5 is in state STARTED
2025-05-25 03:51:12.437992 | orchestrator | 2025-05-25 03:51:12 | INFO  | Task c0214740-d9b7-4bee-98a0-0214c94bbfee is in state STARTED
2025-05-25 03:51:12.443770 | orchestrator | 2025-05-25 03:51:12 | INFO  | Task ac037253-5955-4a57-b3dc-0e762bea6c80 is in state STARTED
2025-05-25 03:51:12.445808 | orchestrator | 2025-05-25 03:51:12 | INFO  | Task a839d0dc-b1b8-464e-b4ab-1e7908546cf0 is in state STARTED
2025-05-25 03:51:12.446179 | orchestrator | 2025-05-25 03:51:12 | INFO  | Task a5c3152a-f935-4d0c-ba37-5585c5fa0428 is in state STARTED
2025-05-25 03:51:12.446259 | orchestrator | 2025-05-25 03:51:12 | INFO  | Wait 1 second(s) until the next check
2025-05-25 03:51:15.491410 | orchestrator | 2025-05-25 03:51:15 | INFO  | Task f1699c70-0b05-422e-b6a2-8662964871e5 is in state STARTED
2025-05-25 03:51:15.495362 | orchestrator | 2025-05-25 03:51:15 | INFO  | Task c0214740-d9b7-4bee-98a0-0214c94bbfee is in state STARTED
2025-05-25 03:51:15.497201 | orchestrator | 2025-05-25 03:51:15 | INFO  | Task ac037253-5955-4a57-b3dc-0e762bea6c80 is in state STARTED
2025-05-25 03:51:15.497999 | orchestrator | 2025-05-25 03:51:15 | INFO  | Task a839d0dc-b1b8-464e-b4ab-1e7908546cf0 is in state STARTED
2025-05-25 03:51:15.498765 | orchestrator | 2025-05-25 03:51:15 | INFO  | Task a5c3152a-f935-4d0c-ba37-5585c5fa0428 is in state STARTED
2025-05-25 03:51:15.498796 | orchestrator | 2025-05-25 03:51:15 | INFO  | Wait 1 second(s) until the next check
2025-05-25 03:51:18.528641 | orchestrator | 2025-05-25 03:51:18 | INFO  | Task f1699c70-0b05-422e-b6a2-8662964871e5 is in state STARTED
2025-05-25 03:51:18.528760 | orchestrator | 2025-05-25 03:51:18 | INFO  | Task c0214740-d9b7-4bee-98a0-0214c94bbfee is in state STARTED
2025-05-25 03:51:18.529291 | orchestrator | 2025-05-25 03:51:18 | INFO  | Task ac037253-5955-4a57-b3dc-0e762bea6c80 is in state STARTED
2025-05-25 03:51:18.529792 | orchestrator | 2025-05-25 03:51:18 | INFO  | Task a839d0dc-b1b8-464e-b4ab-1e7908546cf0 is in state STARTED
2025-05-25 03:51:18.530558 | orchestrator | 2025-05-25 03:51:18 | INFO  | Task a5c3152a-f935-4d0c-ba37-5585c5fa0428 is in state STARTED
2025-05-25 03:51:18.530790 | orchestrator | 2025-05-25 03:51:18 | INFO  | Wait 1 second(s) until the next check
2025-05-25 03:51:21.562620 | orchestrator | 2025-05-25 03:51:21 | INFO  | Task f1699c70-0b05-422e-b6a2-8662964871e5 is in state STARTED
2025-05-25 03:51:21.563447 | orchestrator | 2025-05-25 03:51:21 | INFO  | Task c0214740-d9b7-4bee-98a0-0214c94bbfee is in state STARTED
2025-05-25 03:51:21.564714 | orchestrator | 2025-05-25 03:51:21 | INFO  | Task ac037253-5955-4a57-b3dc-0e762bea6c80 is in state STARTED
2025-05-25 03:51:21.566221 | orchestrator | 2025-05-25 03:51:21 | INFO  | Task a839d0dc-b1b8-464e-b4ab-1e7908546cf0 is in state STARTED
2025-05-25 03:51:21.567149 | orchestrator | 2025-05-25 03:51:21 | INFO  | Task a5c3152a-f935-4d0c-ba37-5585c5fa0428 is in state STARTED
2025-05-25 03:51:21.567213 | orchestrator | 2025-05-25 03:51:21 | INFO  | Wait 1 second(s) until the next check
2025-05-25 03:51:24.614693 | orchestrator | 2025-05-25 03:51:24 | INFO  | Task f1699c70-0b05-422e-b6a2-8662964871e5 is in state STARTED
2025-05-25 03:51:24.614958 | orchestrator | 2025-05-25 03:51:24 | INFO  | Task c0214740-d9b7-4bee-98a0-0214c94bbfee is in state STARTED
2025-05-25 03:51:24.615929 | orchestrator | 2025-05-25 03:51:24 | INFO  | Task ac037253-5955-4a57-b3dc-0e762bea6c80 is in state STARTED
2025-05-25 03:51:24.616988 | orchestrator | 2025-05-25 03:51:24 | INFO  | Task a839d0dc-b1b8-464e-b4ab-1e7908546cf0 is in state STARTED
2025-05-25 03:51:24.617928 | orchestrator | 2025-05-25 03:51:24 | INFO  | Task a5c3152a-f935-4d0c-ba37-5585c5fa0428 is in state STARTED
2025-05-25 03:51:24.618105 | orchestrator | 2025-05-25 03:51:24 | INFO  | Wait 1 second(s) until the next check
2025-05-25 03:51:27.655427 | orchestrator | 2025-05-25 03:51:27 | INFO  | Task f1699c70-0b05-422e-b6a2-8662964871e5 is in state STARTED
2025-05-25 03:51:27.657197 | orchestrator | 2025-05-25 03:51:27 | INFO  | Task c0214740-d9b7-4bee-98a0-0214c94bbfee is in state STARTED
2025-05-25 03:51:27.658054 | orchestrator | 2025-05-25 03:51:27 | INFO  | Task ac037253-5955-4a57-b3dc-0e762bea6c80 is in state STARTED
2025-05-25 03:51:27.660157 | orchestrator | 2025-05-25 03:51:27 | INFO  | Task a839d0dc-b1b8-464e-b4ab-1e7908546cf0 is in state STARTED
2025-05-25 03:51:27.662374 | orchestrator | 2025-05-25 03:51:27 | INFO  | Task a5c3152a-f935-4d0c-ba37-5585c5fa0428 is in state STARTED
2025-05-25 03:51:27.662464 | orchestrator | 2025-05-25 03:51:27 | INFO  | Wait 1 second(s) until the next check
2025-05-25 03:51:30.716093 | orchestrator | 2025-05-25 03:51:30 | INFO  | Task f1699c70-0b05-422e-b6a2-8662964871e5 is in state STARTED
2025-05-25 03:51:30.716609 | orchestrator | 2025-05-25 03:51:30 | INFO  | Task c0214740-d9b7-4bee-98a0-0214c94bbfee is in state STARTED
2025-05-25 03:51:30.717794 | orchestrator | 2025-05-25 03:51:30 | INFO  | Task ac037253-5955-4a57-b3dc-0e762bea6c80 is in state STARTED
2025-05-25 03:51:30.718798 | orchestrator | 2025-05-25 03:51:30 | INFO  | Task a839d0dc-b1b8-464e-b4ab-1e7908546cf0 is in state STARTED
2025-05-25 03:51:30.719654 | orchestrator | 2025-05-25 03:51:30 | INFO  | Task a5c3152a-f935-4d0c-ba37-5585c5fa0428 is in state STARTED
2025-05-25 03:51:30.719671 | orchestrator | 2025-05-25 03:51:30 | INFO  | Wait 1 second(s) until the next check
2025-05-25 03:51:33.755505 | orchestrator | 2025-05-25 03:51:33 | INFO  | Task f1699c70-0b05-422e-b6a2-8662964871e5 is in state STARTED
2025-05-25 03:51:33.755616 | orchestrator | 2025-05-25 03:51:33 | INFO  | Task c0214740-d9b7-4bee-98a0-0214c94bbfee is in state STARTED
2025-05-25 03:51:33.756600 | orchestrator | 2025-05-25 03:51:33 | INFO  | Task ac037253-5955-4a57-b3dc-0e762bea6c80 is in state STARTED
2025-05-25 03:51:33.757431 | orchestrator | 2025-05-25 03:51:33 | INFO  | Task a839d0dc-b1b8-464e-b4ab-1e7908546cf0 is in state STARTED
2025-05-25 03:51:33.758218 | orchestrator | 2025-05-25 03:51:33 | INFO  | Task a5c3152a-f935-4d0c-ba37-5585c5fa0428 is in state STARTED
2025-05-25 03:51:33.758260 | orchestrator | 2025-05-25 03:51:33 | INFO  | Wait 1 second(s) until the next check
2025-05-25 03:51:36.808852 | orchestrator | 2025-05-25 03:51:36 | INFO  | Task f1699c70-0b05-422e-b6a2-8662964871e5 is in state STARTED
2025-05-25 03:51:36.808977 | orchestrator | 2025-05-25 03:51:36 | INFO  | Task c0214740-d9b7-4bee-98a0-0214c94bbfee is in state STARTED
2025-05-25 03:51:36.809000 | orchestrator | 2025-05-25 03:51:36 | INFO  | Task ac037253-5955-4a57-b3dc-0e762bea6c80 is in state STARTED
2025-05-25 03:51:36.809019 | orchestrator | 2025-05-25 03:51:36 | INFO  | Task a839d0dc-b1b8-464e-b4ab-1e7908546cf0 is in state STARTED
2025-05-25 03:51:36.809644 | orchestrator | 2025-05-25 03:51:36 | INFO  | Task a5c3152a-f935-4d0c-ba37-5585c5fa0428 is in state STARTED
2025-05-25 03:51:36.809688 | orchestrator | 2025-05-25 03:51:36 | INFO  | Wait 1 second(s) until the next check
2025-05-25 03:51:39.849721 | orchestrator | 2025-05-25 03:51:39 | INFO  | Task f1699c70-0b05-422e-b6a2-8662964871e5 is in state STARTED
2025-05-25 03:51:39.851007 | orchestrator | 2025-05-25 03:51:39 | INFO  | Task c0214740-d9b7-4bee-98a0-0214c94bbfee is in state STARTED
2025-05-25 03:51:39.854562 | orchestrator | 2025-05-25 03:51:39 | INFO  | Task ac037253-5955-4a57-b3dc-0e762bea6c80 is in state STARTED
2025-05-25 03:51:39.854598 | orchestrator | 2025-05-25 03:51:39 | INFO  | Task a839d0dc-b1b8-464e-b4ab-1e7908546cf0 is in state STARTED
2025-05-25 03:51:39.857900 | orchestrator | 2025-05-25 03:51:39 | INFO  | Task a5c3152a-f935-4d0c-ba37-5585c5fa0428 is in state STARTED
2025-05-25 03:51:39.857935 | orchestrator | 2025-05-25 03:51:39 | INFO  | Wait 1 second(s) until the next check
2025-05-25 03:51:42.909022 | orchestrator | 2025-05-25 03:51:42 | INFO  | Task f1699c70-0b05-422e-b6a2-8662964871e5 is in state STARTED
2025-05-25 03:51:42.911330 | orchestrator | 2025-05-25 03:51:42 | INFO  | Task c0214740-d9b7-4bee-98a0-0214c94bbfee is in state STARTED
2025-05-25 03:51:42.913512 | orchestrator | 2025-05-25 03:51:42 | INFO  | Task ac037253-5955-4a57-b3dc-0e762bea6c80 is in state STARTED
2025-05-25 03:51:42.915180 | orchestrator | 2025-05-25 03:51:42 | INFO  | Task a839d0dc-b1b8-464e-b4ab-1e7908546cf0 is in state STARTED
2025-05-25 03:51:42.917403 | orchestrator | 2025-05-25 03:51:42 | INFO  | Task a5c3152a-f935-4d0c-ba37-5585c5fa0428 is in state STARTED
2025-05-25 03:51:42.917432 | orchestrator | 2025-05-25 03:51:42 | INFO  | Wait 1 second(s) until the next check
2025-05-25 03:51:45.960674 | orchestrator | 2025-05-25 03:51:45 | INFO  | Task
f1699c70-0b05-422e-b6a2-8662964871e5 is in state STARTED
2025-05-25 03:51:45.961457 | orchestrator | 2025-05-25 03:51:45 | INFO  | Task c0214740-d9b7-4bee-98a0-0214c94bbfee is in state STARTED
2025-05-25 03:51:45.962899 | orchestrator | 2025-05-25 03:51:45 | INFO  | Task ac037253-5955-4a57-b3dc-0e762bea6c80 is in state SUCCESS
2025-05-25 03:51:45.964934 | orchestrator |
2025-05-25 03:51:45.964980 | orchestrator |
2025-05-25 03:51:45.964993 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2025-05-25 03:51:45.965006 | orchestrator |
2025-05-25 03:51:45.965017 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2025-05-25 03:51:45.965029 | orchestrator | Sunday 25 May 2025 03:50:39 +0000 (0:00:00.295) 0:00:00.295 ************
2025-05-25 03:51:45.965040 | orchestrator | ok: [testbed-node-0]
2025-05-25 03:51:45.965052 | orchestrator | ok: [testbed-node-1]
2025-05-25 03:51:45.965063 | orchestrator | ok: [testbed-node-2]
2025-05-25 03:51:45.965074 | orchestrator | ok: [testbed-node-3]
2025-05-25 03:51:45.965085 | orchestrator | ok: [testbed-node-4]
2025-05-25 03:51:45.965095 | orchestrator | ok: [testbed-node-5]
2025-05-25 03:51:45.965146 | orchestrator |
2025-05-25 03:51:45.965161 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2025-05-25 03:51:45.965173 | orchestrator | Sunday 25 May 2025 03:50:40 +0000 (0:00:01.089) 0:00:01.385 ************
2025-05-25 03:51:45.965206 | orchestrator | ok: [testbed-node-0] => (item=enable_openvswitch_True_enable_ovs_dpdk_False)
2025-05-25 03:51:45.965218 | orchestrator | ok: [testbed-node-1] => (item=enable_openvswitch_True_enable_ovs_dpdk_False)
2025-05-25 03:51:45.965229 | orchestrator | ok: [testbed-node-2] => (item=enable_openvswitch_True_enable_ovs_dpdk_False)
2025-05-25 03:51:45.965239 | orchestrator | ok: [testbed-node-3] => (item=enable_openvswitch_True_enable_ovs_dpdk_False)
2025-05-25 03:51:45.965250 | orchestrator | ok: [testbed-node-4] => (item=enable_openvswitch_True_enable_ovs_dpdk_False)
2025-05-25 03:51:45.965260 | orchestrator | ok: [testbed-node-5] => (item=enable_openvswitch_True_enable_ovs_dpdk_False)
2025-05-25 03:51:45.965271 | orchestrator |
2025-05-25 03:51:45.965282 | orchestrator | PLAY [Apply role openvswitch] **************************************************
2025-05-25 03:51:45.965292 | orchestrator |
2025-05-25 03:51:45.965310 | orchestrator | TASK [openvswitch : include_tasks] *********************************************
2025-05-25 03:51:45.965321 | orchestrator | Sunday 25 May 2025 03:50:41 +0000 (0:00:00.976) 0:00:02.361 ************
2025-05-25 03:51:45.965333 | orchestrator | included: /ansible/roles/openvswitch/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-05-25 03:51:45.965345 | orchestrator |
2025-05-25 03:51:45.965355 | orchestrator | TASK [module-load : Load modules] **********************************************
2025-05-25 03:51:45.965366 | orchestrator | Sunday 25 May 2025 03:50:43 +0000 (0:00:01.898) 0:00:04.260 ************
2025-05-25 03:51:45.965377 | orchestrator | changed: [testbed-node-1] => (item=openvswitch)
2025-05-25 03:51:45.965388 | orchestrator | changed: [testbed-node-0] => (item=openvswitch)
2025-05-25 03:51:45.965399 | orchestrator | changed: [testbed-node-2] => (item=openvswitch)
2025-05-25 03:51:45.965409 | orchestrator | changed: [testbed-node-3] => (item=openvswitch)
2025-05-25 03:51:45.965420 | orchestrator | changed: [testbed-node-4] => (item=openvswitch)
2025-05-25 03:51:45.965430 | orchestrator | changed: [testbed-node-5] => (item=openvswitch)
2025-05-25 03:51:45.965441 | orchestrator |
2025-05-25 03:51:45.965451 | orchestrator | TASK [module-load : Persist modules via modules-load.d] ************************
2025-05-25 03:51:45.965462 | orchestrator | Sunday 25 May 2025 03:50:45 +0000 (0:00:01.912) 0:00:06.172 ************
2025-05-25 03:51:45.965473 | orchestrator | changed: [testbed-node-2] => (item=openvswitch)
2025-05-25 03:51:45.965483 | orchestrator | changed: [testbed-node-0] => (item=openvswitch)
2025-05-25 03:51:45.965494 | orchestrator | changed: [testbed-node-3] => (item=openvswitch)
2025-05-25 03:51:45.965505 | orchestrator | changed: [testbed-node-1] => (item=openvswitch)
2025-05-25 03:51:45.965518 | orchestrator | changed: [testbed-node-4] => (item=openvswitch)
2025-05-25 03:51:45.965530 | orchestrator | changed: [testbed-node-5] => (item=openvswitch)
2025-05-25 03:51:45.965541 | orchestrator |
2025-05-25 03:51:45.965553 | orchestrator | TASK [module-load : Drop module persistence] ***********************************
2025-05-25 03:51:45.965565 | orchestrator | Sunday 25 May 2025 03:50:47 +0000 (0:00:02.048) 0:00:08.221 ************
2025-05-25 03:51:45.965577 | orchestrator | skipping: [testbed-node-0] => (item=openvswitch)
2025-05-25 03:51:45.965589 | orchestrator | skipping: [testbed-node-0]
2025-05-25 03:51:45.965602 | orchestrator | skipping: [testbed-node-1] => (item=openvswitch)
2025-05-25 03:51:45.965614 | orchestrator | skipping: [testbed-node-1]
2025-05-25 03:51:45.965626 | orchestrator | skipping: [testbed-node-2] => (item=openvswitch)
2025-05-25 03:51:45.965638 | orchestrator | skipping: [testbed-node-2]
2025-05-25 03:51:45.965651 | orchestrator | skipping: [testbed-node-3] => (item=openvswitch)
2025-05-25 03:51:45.965663 | orchestrator | skipping: [testbed-node-3]
2025-05-25 03:51:45.965675 | orchestrator | skipping: [testbed-node-4] => (item=openvswitch)
2025-05-25 03:51:45.965687 | orchestrator | skipping: [testbed-node-4]
2025-05-25 03:51:45.965698 | orchestrator | skipping: [testbed-node-5] => (item=openvswitch)
2025-05-25 03:51:45.965711 | orchestrator | skipping: [testbed-node-5]
2025-05-25 03:51:45.965722 | orchestrator |
2025-05-25 03:51:45.965742 | orchestrator | TASK [openvswitch : Create /run/openvswitch directory on host] *****************
2025-05-25 03:51:45.965755 | orchestrator | Sunday 25 May 2025 03:50:48 +0000 (0:00:01.087) 0:00:09.824 ************
2025-05-25 03:51:45.965767 | orchestrator | skipping: [testbed-node-0]
2025-05-25 03:51:45.965779 | orchestrator | skipping: [testbed-node-1]
2025-05-25 03:51:45.965791 | orchestrator | skipping: [testbed-node-2]
2025-05-25 03:51:45.965803 | orchestrator | skipping: [testbed-node-3]
2025-05-25 03:51:45.965815 | orchestrator | skipping: [testbed-node-4]
2025-05-25 03:51:45.965827 | orchestrator | skipping: [testbed-node-5]
2025-05-25 03:51:45.965839 | orchestrator |
2025-05-25 03:51:45.965851 | orchestrator | TASK [openvswitch : Ensuring config directories exist] *************************
2025-05-25 03:51:45.965863 | orchestrator | Sunday 25 May 2025 03:50:49 +0000 (0:00:01.087) 0:00:10.912 ************
2025-05-25 03:51:45.965894 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2025-05-25 03:51:45.965910 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2025-05-25 03:51:45.965926 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2025-05-25 03:51:45.965938 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2025-05-25 03:51:45.965950 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2025-05-25 03:51:45.966147 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2025-05-25 03:51:45.966177 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2025-05-25 03:51:45.966195 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2025-05-25 03:51:45.966207 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2025-05-25 03:51:45.966219 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared',
'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-05-25 03:51:45.966237 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-05-25 03:51:45.966255 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-05-25 03:51:45.966267 | orchestrator | 2025-05-25 03:51:45.966278 | orchestrator | TASK [openvswitch : Copying over config.json files for services] *************** 2025-05-25 03:51:45.966290 | orchestrator | Sunday 25 May 2025 03:50:52 +0000 (0:00:02.225) 0:00:13.138 ************ 2025-05-25 03:51:45.966301 | 
orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-05-25 03:51:45.966317 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-05-25 03:51:45.966329 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 
'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-05-25 03:51:45.966341 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-05-25 03:51:45.966359 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-05-25 03:51:45.966390 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 
'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-05-25 03:51:45.966402 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-05-25 03:51:45.966418 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-05-25 03:51:45.966430 | orchestrator | changed: [testbed-node-2] => 
(item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-05-25 03:51:45.966448 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-05-25 03:51:45.966459 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-05-25 03:51:45.966479 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-05-25 03:51:45.966491 | orchestrator | 2025-05-25 03:51:45.966502 | orchestrator | TASK [openvswitch : Copying over ovs-vsctl wrapper] **************************** 2025-05-25 03:51:45.966513 | orchestrator | Sunday 25 May 2025 03:50:55 +0000 (0:00:03.507) 0:00:16.645 ************ 2025-05-25 03:51:45.966524 | orchestrator | skipping: [testbed-node-0] 2025-05-25 03:51:45.966535 | orchestrator | skipping: [testbed-node-1] 2025-05-25 03:51:45.966546 | orchestrator | skipping: [testbed-node-2] 2025-05-25 03:51:45.966556 | orchestrator | skipping: [testbed-node-3] 2025-05-25 03:51:45.966567 | orchestrator | skipping: [testbed-node-4] 2025-05-25 03:51:45.966577 | orchestrator | skipping: [testbed-node-5] 2025-05-25 03:51:45.966588 | orchestrator | 2025-05-25 03:51:45.966599 | orchestrator | TASK [openvswitch : Check openvswitch containers] ****************************** 2025-05-25 03:51:45.966609 | orchestrator | Sunday 25 May 2025 03:50:56 +0000 (0:00:01.335) 0:00:17.981 ************ 2025-05-25 03:51:45.966625 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 
'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-05-25 03:51:45.966637 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-05-25 03:51:45.966658 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-05-25 03:51:45.966669 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-05-25 03:51:45.966688 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-05-25 03:51:45.966699 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-05-25 03:51:45.966715 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-05-25 03:51:45.966738 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-05-25 03:51:45.966749 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 
'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-05-25 03:51:45.966785 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-05-25 03:51:45.966805 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-05-25 03:51:45.966817 | orchestrator | changed: [testbed-node-5] 
=> (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-05-25 03:51:45.966828 | orchestrator | 2025-05-25 03:51:45.966839 | orchestrator | TASK [openvswitch : Flush Handlers] ******************************************** 2025-05-25 03:51:45.966850 | orchestrator | Sunday 25 May 2025 03:50:59 +0000 (0:00:02.299) 0:00:20.280 ************ 2025-05-25 03:51:45.966861 | orchestrator | 2025-05-25 03:51:45.966872 | orchestrator | TASK [openvswitch : Flush Handlers] ******************************************** 2025-05-25 03:51:45.966889 | orchestrator | Sunday 25 May 2025 03:50:59 +0000 (0:00:00.128) 0:00:20.409 ************ 2025-05-25 03:51:45.966900 | orchestrator | 2025-05-25 03:51:45.966911 | orchestrator | TASK [openvswitch : Flush Handlers] ******************************************** 2025-05-25 03:51:45.966921 | orchestrator | Sunday 25 May 2025 03:50:59 +0000 (0:00:00.155) 0:00:20.565 ************ 2025-05-25 03:51:45.966932 | orchestrator | 2025-05-25 03:51:45.966943 | orchestrator | TASK [openvswitch : Flush Handlers] ******************************************** 2025-05-25 03:51:45.966954 | orchestrator | Sunday 25 May 2025 03:50:59 +0000 (0:00:00.290) 0:00:20.855 ************ 2025-05-25 03:51:45.966964 | orchestrator | 2025-05-25 03:51:45.966975 | orchestrator | TASK [openvswitch : Flush Handlers] ******************************************** 2025-05-25 03:51:45.966985 | 
orchestrator | Sunday 25 May 2025 03:51:00 +0000 (0:00:00.305) 0:00:21.161 ************ 2025-05-25 03:51:45.966996 | orchestrator | 2025-05-25 03:51:45.967007 | orchestrator | TASK [openvswitch : Flush Handlers] ******************************************** 2025-05-25 03:51:45.967017 | orchestrator | Sunday 25 May 2025 03:51:00 +0000 (0:00:00.404) 0:00:21.565 ************ 2025-05-25 03:51:45.967028 | orchestrator | 2025-05-25 03:51:45.967038 | orchestrator | RUNNING HANDLER [openvswitch : Restart openvswitch-db-server container] ******** 2025-05-25 03:51:45.967049 | orchestrator | Sunday 25 May 2025 03:51:00 +0000 (0:00:00.404) 0:00:21.970 ************ 2025-05-25 03:51:45.967060 | orchestrator | changed: [testbed-node-0] 2025-05-25 03:51:45.967070 | orchestrator | changed: [testbed-node-3] 2025-05-25 03:51:45.967081 | orchestrator | changed: [testbed-node-1] 2025-05-25 03:51:45.967091 | orchestrator | changed: [testbed-node-5] 2025-05-25 03:51:45.967102 | orchestrator | changed: [testbed-node-4] 2025-05-25 03:51:45.967188 | orchestrator | changed: [testbed-node-2] 2025-05-25 03:51:45.967200 | orchestrator | 2025-05-25 03:51:45.967210 | orchestrator | RUNNING HANDLER [openvswitch : Waiting for openvswitch_db service to be ready] *** 2025-05-25 03:51:45.967221 | orchestrator | Sunday 25 May 2025 03:51:12 +0000 (0:00:11.961) 0:00:33.932 ************ 2025-05-25 03:51:45.967232 | orchestrator | ok: [testbed-node-1] 2025-05-25 03:51:45.967243 | orchestrator | ok: [testbed-node-0] 2025-05-25 03:51:45.967253 | orchestrator | ok: [testbed-node-2] 2025-05-25 03:51:45.967264 | orchestrator | ok: [testbed-node-3] 2025-05-25 03:51:45.967274 | orchestrator | ok: [testbed-node-4] 2025-05-25 03:51:45.967284 | orchestrator | ok: [testbed-node-5] 2025-05-25 03:51:45.967293 | orchestrator | 2025-05-25 03:51:45.967303 | orchestrator | RUNNING HANDLER [openvswitch : Restart openvswitch-vswitchd container] ********* 2025-05-25 03:51:45.967312 | orchestrator | Sunday 25 May 2025 03:51:15 
+0000 (0:00:02.219) 0:00:36.151 ************ 2025-05-25 03:51:45.967322 | orchestrator | changed: [testbed-node-0] 2025-05-25 03:51:45.967331 | orchestrator | changed: [testbed-node-1] 2025-05-25 03:51:45.967340 | orchestrator | changed: [testbed-node-3] 2025-05-25 03:51:45.967350 | orchestrator | changed: [testbed-node-5] 2025-05-25 03:51:45.967359 | orchestrator | changed: [testbed-node-4] 2025-05-25 03:51:45.967368 | orchestrator | changed: [testbed-node-2] 2025-05-25 03:51:45.967378 | orchestrator | 2025-05-25 03:51:45.967387 | orchestrator | TASK [openvswitch : Set system-id, hostname and hw-offload] ******************** 2025-05-25 03:51:45.967397 | orchestrator | Sunday 25 May 2025 03:51:23 +0000 (0:00:08.601) 0:00:44.752 ************ 2025-05-25 03:51:45.967406 | orchestrator | changed: [testbed-node-2] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-2'}) 2025-05-25 03:51:45.967416 | orchestrator | changed: [testbed-node-1] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-1'}) 2025-05-25 03:51:45.967426 | orchestrator | changed: [testbed-node-0] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-0'}) 2025-05-25 03:51:45.967435 | orchestrator | changed: [testbed-node-3] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-3'}) 2025-05-25 03:51:45.967445 | orchestrator | changed: [testbed-node-5] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-5'}) 2025-05-25 03:51:45.967461 | orchestrator | changed: [testbed-node-4] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-4'}) 2025-05-25 03:51:45.967478 | orchestrator | changed: [testbed-node-2] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-2'}) 2025-05-25 03:51:45.967488 | orchestrator | changed: [testbed-node-3] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-3'}) 2025-05-25 03:51:45.967497 | 
orchestrator | changed: [testbed-node-1] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-1'})
2025-05-25 03:51:45.967507 | orchestrator | changed: [testbed-node-0] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-0'})
2025-05-25 03:51:45.967516 | orchestrator | changed: [testbed-node-4] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-4'})
2025-05-25 03:51:45.967532 | orchestrator | changed: [testbed-node-5] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-5'})
2025-05-25 03:51:45.967542 | orchestrator | ok: [testbed-node-2] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'})
2025-05-25 03:51:45.967552 | orchestrator | ok: [testbed-node-3] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'})
2025-05-25 03:51:45.967561 | orchestrator | ok: [testbed-node-1] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'})
2025-05-25 03:51:45.967570 | orchestrator | ok: [testbed-node-0] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'})
2025-05-25 03:51:45.967584 | orchestrator | ok: [testbed-node-4] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'})
2025-05-25 03:51:45.967594 | orchestrator | ok: [testbed-node-5] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'})
2025-05-25 03:51:45.967603 | orchestrator |
2025-05-25 03:51:45.967613 | orchestrator | TASK [openvswitch : Ensuring OVS bridge is properly setup] *********************
2025-05-25 03:51:45.967623 | orchestrator | Sunday 25 May 2025 03:51:30 +0000 (0:00:07.170) 0:00:51.922 ************
2025-05-25 03:51:45.967632 | orchestrator | skipping: [testbed-node-3] => (item=br-ex)
2025-05-25 03:51:45.967642 | orchestrator | skipping: [testbed-node-3]
2025-05-25 03:51:45.967651 | orchestrator | skipping: [testbed-node-4] => (item=br-ex)
2025-05-25 03:51:45.967661 | orchestrator | skipping: [testbed-node-4]
2025-05-25 03:51:45.967670 | orchestrator | skipping: [testbed-node-5] => (item=br-ex)
2025-05-25 03:51:45.967680 | orchestrator | skipping: [testbed-node-5]
2025-05-25 03:51:45.967689 | orchestrator | changed: [testbed-node-0] => (item=br-ex)
2025-05-25 03:51:45.967698 | orchestrator | changed: [testbed-node-1] => (item=br-ex)
2025-05-25 03:51:45.967708 | orchestrator | changed: [testbed-node-2] => (item=br-ex)
2025-05-25 03:51:45.967717 | orchestrator |
2025-05-25 03:51:45.967727 | orchestrator | TASK [openvswitch : Ensuring OVS ports are properly setup] *********************
2025-05-25 03:51:45.967736 | orchestrator | Sunday 25 May 2025 03:51:33 +0000 (0:00:02.122) 0:00:54.045 ************
2025-05-25 03:51:45.967746 | orchestrator | skipping: [testbed-node-3] => (item=['br-ex', 'vxlan0'])
2025-05-25 03:51:45.967756 | orchestrator | skipping: [testbed-node-3]
2025-05-25 03:51:45.967765 | orchestrator | skipping: [testbed-node-4] => (item=['br-ex', 'vxlan0'])
2025-05-25 03:51:45.967775 | orchestrator | skipping: [testbed-node-4]
2025-05-25 03:51:45.967784 | orchestrator | skipping: [testbed-node-5] => (item=['br-ex', 'vxlan0'])
2025-05-25 03:51:45.967794 | orchestrator | skipping: [testbed-node-5]
2025-05-25 03:51:45.967803 | orchestrator | changed: [testbed-node-2] => (item=['br-ex', 'vxlan0'])
2025-05-25 03:51:45.967813 | orchestrator | changed: [testbed-node-0] => (item=['br-ex', 'vxlan0'])
2025-05-25 03:51:45.967822 | orchestrator | changed: [testbed-node-1] => (item=['br-ex', 'vxlan0'])
2025-05-25 03:51:45.967831 | orchestrator |
2025-05-25 03:51:45.967841 | orchestrator | RUNNING HANDLER [openvswitch : Restart openvswitch-vswitchd container] *********
2025-05-25 03:51:45.967851 | orchestrator | Sunday 25 May 2025 03:51:36 +0000 (0:00:03.505) 0:00:57.550 ************
2025-05-25 03:51:45.967866 | orchestrator | changed: [testbed-node-0]
2025-05-25 03:51:45.967876 | orchestrator | changed: [testbed-node-5]
2025-05-25 03:51:45.967885 | orchestrator | changed: [testbed-node-1]
2025-05-25 03:51:45.967894 | orchestrator | changed: [testbed-node-2]
2025-05-25 03:51:45.967904 | orchestrator | changed: [testbed-node-3]
2025-05-25 03:51:45.967913 | orchestrator | changed: [testbed-node-4]
2025-05-25 03:51:45.967923 | orchestrator |
2025-05-25 03:51:45.967933 | orchestrator | PLAY RECAP *********************************************************************
2025-05-25 03:51:45.967942 | orchestrator | testbed-node-0 : ok=15  changed=11  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0
2025-05-25 03:51:45.967953 | orchestrator | testbed-node-1 : ok=15  changed=11  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0
2025-05-25 03:51:45.967962 | orchestrator | testbed-node-2 : ok=15  changed=11  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0
2025-05-25 03:51:45.967972 | orchestrator | testbed-node-3 : ok=13  changed=9  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0
2025-05-25 03:51:45.967982 | orchestrator | testbed-node-4 : ok=13  changed=9  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0
2025-05-25 03:51:45.967997 | orchestrator | testbed-node-5 : ok=13  changed=9  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0
2025-05-25 03:51:45.968007 | orchestrator |
2025-05-25 03:51:45.968017 | orchestrator |
2025-05-25 03:51:45.968027 | orchestrator | TASKS RECAP ********************************************************************
2025-05-25 03:51:45.968037 | orchestrator | Sunday 25 May 2025 03:51:44 +0000 (0:00:07.983) 0:01:05.533 ************
2025-05-25 03:51:45.968047 | orchestrator | ===============================================================================
2025-05-25 03:51:45.968056 | orchestrator | openvswitch : Restart openvswitch-vswitchd container ------------------- 16.58s
2025-05-25 03:51:45.968066 | orchestrator | openvswitch : Restart openvswitch-db-server container ------------------ 11.96s
2025-05-25 03:51:45.968075 | orchestrator | openvswitch : Set system-id, hostname and hw-offload -------------------- 7.17s
2025-05-25 03:51:45.968084 | orchestrator | openvswitch : Copying over config.json files for services --------------- 3.51s
2025-05-25 03:51:45.968094 | orchestrator | openvswitch : Ensuring OVS ports are properly setup --------------------- 3.51s
2025-05-25 03:51:45.968103 | orchestrator | openvswitch : Check openvswitch containers ------------------------------ 2.30s
2025-05-25 03:51:45.968139 | orchestrator | openvswitch : Ensuring config directories exist ------------------------- 2.23s
2025-05-25 03:51:45.968156 | orchestrator | openvswitch : Waiting for openvswitch_db service to be ready ------------ 2.22s
2025-05-25 03:51:45.968171 | orchestrator | openvswitch : Ensuring OVS bridge is properly setup --------------------- 2.12s
2025-05-25 03:51:45.968188 | orchestrator | module-load : Persist modules via modules-load.d ------------------------ 2.05s
2025-05-25 03:51:45.968199 | orchestrator | module-load : Load modules ---------------------------------------------- 1.91s
2025-05-25 03:51:45.968213 | orchestrator | openvswitch : include_tasks --------------------------------------------- 1.90s
2025-05-25 03:51:45.968223 | orchestrator | openvswitch : Flush Handlers -------------------------------------------- 1.69s
2025-05-25 03:51:45.968232 | orchestrator | module-load : Drop module persistence ----------------------------------- 1.60s
2025-05-25 03:51:45.968242 | orchestrator | openvswitch : Copying over ovs-vsctl wrapper ---------------------------- 1.34s
2025-05-25 03:51:45.968251 | orchestrator | Group hosts based on Kolla action --------------------------------------- 1.09s
2025-05-25 03:51:45.968261 | orchestrator | openvswitch : Create /run/openvswitch directory on host ----------------- 1.09s
2025-05-25 03:51:45.968270 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.98s
2025-05-25 03:51:45.968280 | orchestrator | 2025-05-25 03:51:45 | INFO  | Task a839d0dc-b1b8-464e-b4ab-1e7908546cf0 is in state STARTED
2025-05-25 03:51:45.968296 | orchestrator | 2025-05-25 03:51:45 | INFO  | Task a5c3152a-f935-4d0c-ba37-5585c5fa0428 is in state STARTED
2025-05-25 03:51:45.968306 | orchestrator | 2025-05-25 03:51:45 | INFO  | Wait 1 second(s) until the next check
2025-05-25 03:51:49.011092 | orchestrator | 2025-05-25 03:51:49 | INFO  | Task f1699c70-0b05-422e-b6a2-8662964871e5 is in state STARTED
2025-05-25 03:51:49.011935 | orchestrator | 2025-05-25 03:51:49 | INFO  | Task f0db41f2-e75f-4158-81e2-acab8c664db0 is in state STARTED
2025-05-25 03:51:49.013354 | orchestrator | 2025-05-25 03:51:49 | INFO  | Task c0214740-d9b7-4bee-98a0-0214c94bbfee is in state STARTED
2025-05-25 03:51:49.014252 | orchestrator | 2025-05-25 03:51:49 | INFO  | Task a839d0dc-b1b8-464e-b4ab-1e7908546cf0 is in state STARTED
2025-05-25 03:51:49.016913 | orchestrator | 2025-05-25 03:51:49 | INFO  | Task a5c3152a-f935-4d0c-ba37-5585c5fa0428 is in state STARTED
2025-05-25 03:51:49.017883 | orchestrator | 2025-05-25 03:51:49 | INFO  | Wait 1 second(s) until the next check
2025-05-25 03:51:52.047916 | orchestrator | 2025-05-25 03:51:52 | INFO  | Task f1699c70-0b05-422e-b6a2-8662964871e5 is in state STARTED
2025-05-25 03:51:52.049439 | orchestrator | 2025-05-25 03:51:52 | INFO  | Task f0db41f2-e75f-4158-81e2-acab8c664db0 is in state STARTED
2025-05-25 03:51:52.050142 | orchestrator | 2025-05-25 03:51:52 | INFO  | Task c0214740-d9b7-4bee-98a0-0214c94bbfee is in state STARTED
2025-05-25 03:51:52.050840 | orchestrator | 2025-05-25 03:51:52 | INFO  | Task a839d0dc-b1b8-464e-b4ab-1e7908546cf0 is in state STARTED
2025-05-25 03:51:52.051612 | orchestrator | 2025-05-25 03:51:52 | INFO  | Task a5c3152a-f935-4d0c-ba37-5585c5fa0428 is in state STARTED
2025-05-25 03:51:52.051763 | orchestrator | 2025-05-25 03:51:52 | INFO  | Wait 1 second(s) until the
next check
2025-05-25 03:52:56.146606 | orchestrator | 2025-05-25 03:52:56 | INFO  | Task
f1699c70-0b05-422e-b6a2-8662964871e5 is in state SUCCESS
2025-05-25 03:52:56.147683 | orchestrator |
2025-05-25 03:52:56.147770 | orchestrator |
2025-05-25 03:52:56.147802 | orchestrator | PLAY [Prepare all k3s nodes] ***************************************************
2025-05-25 03:52:56.147816 | orchestrator |
2025-05-25 03:52:56.147828 | orchestrator | TASK [k3s_prereq : Validating arguments against arg spec 'main' - Prerequisites] ***
2025-05-25 03:52:56.147841 | orchestrator | Sunday 25 May 2025 03:47:56 +0000 (0:00:00.192) 0:00:00.192 ************
2025-05-25 03:52:56.147853 | orchestrator | ok: [testbed-node-3]
2025-05-25 03:52:56.147867 | orchestrator | ok: [testbed-node-4]
2025-05-25 03:52:56.147879 | orchestrator | ok: [testbed-node-5]
2025-05-25 03:52:56.147891 | orchestrator | ok: [testbed-node-0]
2025-05-25 03:52:56.147903 | orchestrator | ok: [testbed-node-1]
2025-05-25 03:52:56.147915 | orchestrator | ok: [testbed-node-2]
2025-05-25 03:52:56.147947 | orchestrator |
2025-05-25 03:52:56.147960 | orchestrator | TASK [k3s_prereq : Set same timezone on every Server] **************************
2025-05-25 03:52:56.147972 | orchestrator | Sunday 25 May 2025 03:47:57 +0000 (0:00:00.819) 0:00:01.012 ************
2025-05-25 03:52:56.147984 | orchestrator | skipping: [testbed-node-3]
2025-05-25 03:52:56.147997 | orchestrator | skipping: [testbed-node-4]
2025-05-25 03:52:56.148008 | orchestrator | skipping: [testbed-node-5]
2025-05-25 03:52:56.148020 | orchestrator | skipping: [testbed-node-0]
2025-05-25 03:52:56.148032 | orchestrator | skipping: [testbed-node-1]
2025-05-25 03:52:56.148043 | orchestrator | skipping: [testbed-node-2]
2025-05-25 03:52:56.148055 | orchestrator |
2025-05-25 03:52:56.148067 | orchestrator | TASK [k3s_prereq : Set SELinux to disabled state] ******************************
2025-05-25 03:52:56.148079 | orchestrator | Sunday 25 May 2025 03:47:58 +0000 (0:00:00.907) 0:00:01.797 ************
2025-05-25 03:52:56.148091 | orchestrator | skipping: [testbed-node-3]
2025-05-25 03:52:56.148129 | orchestrator | skipping: [testbed-node-4]
2025-05-25 03:52:56.148143 | orchestrator | skipping: [testbed-node-5]
2025-05-25 03:52:56.148153 | orchestrator | skipping: [testbed-node-0]
2025-05-25 03:52:56.148164 | orchestrator | skipping: [testbed-node-1]
2025-05-25 03:52:56.148175 | orchestrator | skipping: [testbed-node-2]
2025-05-25 03:52:56.148186 | orchestrator |
2025-05-25 03:52:56.148196 | orchestrator | TASK [k3s_prereq : Enable IPv4 forwarding] *************************************
2025-05-25 03:52:56.148208 | orchestrator | Sunday 25 May 2025 03:47:59 +0000 (0:00:00.907) 0:00:02.704 ************
2025-05-25 03:52:56.148221 | orchestrator | changed: [testbed-node-4]
2025-05-25 03:52:56.148233 | orchestrator | changed: [testbed-node-5]
2025-05-25 03:52:56.148245 | orchestrator | changed: [testbed-node-3]
2025-05-25 03:52:56.148257 | orchestrator | changed: [testbed-node-0]
2025-05-25 03:52:56.148270 | orchestrator | changed: [testbed-node-1]
2025-05-25 03:52:56.148282 | orchestrator | changed: [testbed-node-2]
2025-05-25 03:52:56.148294 | orchestrator |
2025-05-25 03:52:56.148306 | orchestrator | TASK [k3s_prereq : Enable IPv6 forwarding] *************************************
2025-05-25 03:52:56.148319 | orchestrator | Sunday 25 May 2025 03:48:01 +0000 (0:00:01.895) 0:00:04.600 ************
2025-05-25 03:52:56.148331 | orchestrator | changed: [testbed-node-3]
2025-05-25 03:52:56.148344 | orchestrator | changed: [testbed-node-4]
2025-05-25 03:52:56.148356 | orchestrator | changed: [testbed-node-5]
2025-05-25 03:52:56.148369 | orchestrator | changed: [testbed-node-0]
2025-05-25 03:52:56.148381 | orchestrator | changed: [testbed-node-1]
2025-05-25 03:52:56.148394 | orchestrator | changed: [testbed-node-2]
2025-05-25 03:52:56.148406 | orchestrator |
2025-05-25 03:52:56.148418 | orchestrator | TASK [k3s_prereq : Enable IPv6 router advertisements] **************************
2025-05-25 03:52:56.148463 | orchestrator | Sunday 25 May 2025 03:48:02 +0000 (0:00:01.051) 0:00:05.652 ************
2025-05-25 03:52:56.148477 | orchestrator | changed: [testbed-node-3]
2025-05-25 03:52:56.148489 | orchestrator | changed: [testbed-node-4]
2025-05-25 03:52:56.148501 | orchestrator | changed: [testbed-node-5]
2025-05-25 03:52:56.148513 | orchestrator | changed: [testbed-node-0]
2025-05-25 03:52:56.148526 | orchestrator | changed: [testbed-node-1]
2025-05-25 03:52:56.148538 | orchestrator | changed: [testbed-node-2]
2025-05-25 03:52:56.148551 | orchestrator |
2025-05-25 03:52:56.148562 | orchestrator | TASK [k3s_prereq : Add br_netfilter to /etc/modules-load.d/] *******************
2025-05-25 03:52:56.148573 | orchestrator | Sunday 25 May 2025 03:48:03 +0000 (0:00:01.096) 0:00:06.748 ************
2025-05-25 03:52:56.148584 | orchestrator | skipping: [testbed-node-3]
2025-05-25 03:52:56.148595 | orchestrator | skipping: [testbed-node-4]
2025-05-25 03:52:56.148605 | orchestrator | skipping: [testbed-node-5]
2025-05-25 03:52:56.148616 | orchestrator | skipping: [testbed-node-0]
2025-05-25 03:52:56.148626 | orchestrator | skipping: [testbed-node-1]
2025-05-25 03:52:56.148637 | orchestrator | skipping: [testbed-node-2]
2025-05-25 03:52:56.148648 | orchestrator |
2025-05-25 03:52:56.148659 | orchestrator | TASK [k3s_prereq : Load br_netfilter] ******************************************
2025-05-25 03:52:56.148669 | orchestrator | Sunday 25 May 2025 03:48:04 +0000 (0:00:00.766) 0:00:07.514 ************
2025-05-25 03:52:56.148689 | orchestrator | skipping: [testbed-node-3]
2025-05-25 03:52:56.148700 | orchestrator | skipping: [testbed-node-4]
2025-05-25 03:52:56.148710 | orchestrator | skipping: [testbed-node-5]
2025-05-25 03:52:56.148721 | orchestrator | skipping: [testbed-node-0]
2025-05-25 03:52:56.148731 | orchestrator | skipping: [testbed-node-1]
2025-05-25 03:52:56.148742 | orchestrator | skipping: [testbed-node-2]
2025-05-25 03:52:56.148752 | orchestrator |
2025-05-25 03:52:56.148763 | orchestrator | TASK [k3s_prereq : Set bridge-nf-call-iptables (just to be sure)] **************
2025-05-25 03:52:56.148774 | orchestrator | Sunday 25 May 2025 03:48:04 +0000 (0:00:00.640) 0:00:08.155 ************
2025-05-25 03:52:56.148785 | orchestrator | skipping: [testbed-node-3] => (item=net.bridge.bridge-nf-call-iptables)
2025-05-25 03:52:56.148796 | orchestrator | skipping: [testbed-node-3] => (item=net.bridge.bridge-nf-call-ip6tables)
2025-05-25 03:52:56.148806 | orchestrator | skipping: [testbed-node-3]
2025-05-25 03:52:56.148817 | orchestrator | skipping: [testbed-node-4] => (item=net.bridge.bridge-nf-call-iptables)
2025-05-25 03:52:56.148828 | orchestrator | skipping: [testbed-node-4] => (item=net.bridge.bridge-nf-call-ip6tables)
2025-05-25 03:52:56.148839 | orchestrator | skipping: [testbed-node-4]
2025-05-25 03:52:56.148849 | orchestrator | skipping: [testbed-node-5] => (item=net.bridge.bridge-nf-call-iptables)
2025-05-25 03:52:56.148860 | orchestrator | skipping: [testbed-node-5] => (item=net.bridge.bridge-nf-call-ip6tables)
2025-05-25 03:52:56.148871 | orchestrator | skipping: [testbed-node-5]
2025-05-25 03:52:56.148882 | orchestrator | skipping: [testbed-node-0] => (item=net.bridge.bridge-nf-call-iptables)
2025-05-25 03:52:56.148908 | orchestrator | skipping: [testbed-node-0] => (item=net.bridge.bridge-nf-call-ip6tables)
2025-05-25 03:52:56.148920 | orchestrator | skipping: [testbed-node-0]
2025-05-25 03:52:56.148938 | orchestrator | skipping: [testbed-node-1] => (item=net.bridge.bridge-nf-call-iptables)
2025-05-25 03:52:56.148949 | orchestrator | skipping: [testbed-node-1] => (item=net.bridge.bridge-nf-call-ip6tables)
2025-05-25 03:52:56.148960 | orchestrator | skipping: [testbed-node-1]
2025-05-25 03:52:56.148970 | orchestrator | skipping: [testbed-node-2] => (item=net.bridge.bridge-nf-call-iptables)
2025-05-25 03:52:56.148981 | orchestrator | skipping: [testbed-node-2] => (item=net.bridge.bridge-nf-call-ip6tables)
2025-05-25 03:52:56.148992 | orchestrator | skipping: [testbed-node-2]
2025-05-25 03:52:56.149003 | orchestrator |
2025-05-25 03:52:56.149013 | orchestrator | TASK [k3s_prereq : Add /usr/local/bin to sudo secure_path] *********************
2025-05-25 03:52:56.149024 | orchestrator | Sunday 25 May 2025 03:48:05 +0000 (0:00:00.956) 0:00:09.111 ************
2025-05-25 03:52:56.149035 | orchestrator | skipping: [testbed-node-3]
2025-05-25 03:52:56.149046 | orchestrator | skipping: [testbed-node-4]
2025-05-25 03:52:56.149057 | orchestrator | skipping: [testbed-node-5]
2025-05-25 03:52:56.149067 | orchestrator | skipping: [testbed-node-0]
2025-05-25 03:52:56.149078 | orchestrator | skipping: [testbed-node-1]
2025-05-25 03:52:56.149088 | orchestrator | skipping: [testbed-node-2]
2025-05-25 03:52:56.149099 | orchestrator |
2025-05-25 03:52:56.149180 | orchestrator | TASK [k3s_download : Validating arguments against arg spec 'main' - Manage the downloading of K3S binaries] ***
2025-05-25 03:52:56.149194 | orchestrator | Sunday 25 May 2025 03:48:07 +0000 (0:00:01.393) 0:00:10.505 ************
2025-05-25 03:52:56.149205 | orchestrator | ok: [testbed-node-3]
2025-05-25 03:52:56.149216 | orchestrator | ok: [testbed-node-4]
2025-05-25 03:52:56.149226 | orchestrator | ok: [testbed-node-5]
2025-05-25 03:52:56.149237 | orchestrator | ok: [testbed-node-0]
2025-05-25 03:52:56.149248 | orchestrator | ok: [testbed-node-1]
2025-05-25 03:52:56.149258 | orchestrator | ok: [testbed-node-2]
2025-05-25 03:52:56.149269 | orchestrator |
2025-05-25 03:52:56.149280 | orchestrator | TASK [k3s_download : Download k3s binary x64] **********************************
2025-05-25 03:52:56.149291 | orchestrator | Sunday 25 May 2025 03:48:07 +0000 (0:00:00.549) 0:00:11.055 ************
2025-05-25 03:52:56.149301 | orchestrator | changed: [testbed-node-4]
2025-05-25 03:52:56.149312 | orchestrator | changed: [testbed-node-2]
2025-05-25 03:52:56.149331 | orchestrator | changed: [testbed-node-3]
2025-05-25 03:52:56.149341 | orchestrator | changed: [testbed-node-1]
2025-05-25 03:52:56.149352 | orchestrator | changed: [testbed-node-5]
2025-05-25 03:52:56.149363 | orchestrator | changed: [testbed-node-0]
2025-05-25 03:52:56.149374 | orchestrator |
2025-05-25 03:52:56.149384 | orchestrator | TASK [k3s_download : Download k3s binary arm64] ********************************
2025-05-25 03:52:56.149395 | orchestrator | Sunday 25 May 2025 03:48:13 +0000 (0:00:06.183) 0:00:17.238 ************
2025-05-25 03:52:56.149406 | orchestrator | skipping: [testbed-node-3]
2025-05-25 03:52:56.149417 | orchestrator | skipping: [testbed-node-4]
2025-05-25 03:52:56.149428 | orchestrator | skipping: [testbed-node-5]
2025-05-25 03:52:56.149439 | orchestrator | skipping: [testbed-node-0]
2025-05-25 03:52:56.149449 | orchestrator | skipping: [testbed-node-1]
2025-05-25 03:52:56.149460 | orchestrator | skipping: [testbed-node-2]
2025-05-25 03:52:56.149470 | orchestrator |
2025-05-25 03:52:56.149481 | orchestrator | TASK [k3s_download : Download k3s binary armhf] ********************************
2025-05-25 03:52:56.149492 | orchestrator | Sunday 25 May 2025 03:48:14 +0000 (0:00:01.014) 0:00:18.252 ************
2025-05-25 03:52:56.149503 | orchestrator | skipping: [testbed-node-3]
2025-05-25 03:52:56.149514 | orchestrator | skipping: [testbed-node-4]
2025-05-25 03:52:56.149524 | orchestrator | skipping: [testbed-node-5]
2025-05-25 03:52:56.149534 | orchestrator | skipping: [testbed-node-0]
2025-05-25 03:52:56.149544 | orchestrator | skipping: [testbed-node-1]
2025-05-25 03:52:56.149553 | orchestrator | skipping: [testbed-node-2]
2025-05-25 03:52:56.149562 | orchestrator |
2025-05-25 03:52:56.149572 | orchestrator | TASK [k3s_custom_registries : Validating arguments against arg spec 'main' - Configure the use of a custom container registry] ***
2025-05-25 03:52:56.149583 | orchestrator | Sunday 25 May 2025 03:48:16 +0000 (0:00:01.897) 0:00:20.150 ************
2025-05-25 03:52:56.149593 | orchestrator | skipping: [testbed-node-3]
2025-05-25 03:52:56.149602 | orchestrator | skipping: [testbed-node-4]
2025-05-25 03:52:56.149612 | orchestrator | skipping: [testbed-node-5]
2025-05-25 03:52:56.149621 | orchestrator | skipping: [testbed-node-0]
2025-05-25 03:52:56.149631 | orchestrator | skipping: [testbed-node-1]
2025-05-25 03:52:56.149640 | orchestrator | skipping: [testbed-node-2]
2025-05-25 03:52:56.149650 | orchestrator |
2025-05-25 03:52:56.149659 | orchestrator | TASK [k3s_custom_registries : Create directory /etc/rancher/k3s] ***************
2025-05-25 03:52:56.149669 | orchestrator | Sunday 25 May 2025 03:48:17 +0000 (0:00:00.756) 0:00:20.907 ************
2025-05-25 03:52:56.149679 | orchestrator | skipping: [testbed-node-3] => (item=rancher)
2025-05-25 03:52:56.149689 | orchestrator | skipping: [testbed-node-3] => (item=rancher/k3s)
2025-05-25 03:52:56.149698 | orchestrator | skipping: [testbed-node-3]
2025-05-25 03:52:56.149708 | orchestrator | skipping: [testbed-node-4] => (item=rancher)
2025-05-25 03:52:56.149717 | orchestrator | skipping: [testbed-node-4] => (item=rancher/k3s)
2025-05-25 03:52:56.149727 | orchestrator | skipping: [testbed-node-4]
2025-05-25 03:52:56.149736 | orchestrator | skipping: [testbed-node-5] => (item=rancher)
2025-05-25 03:52:56.149746 | orchestrator | skipping: [testbed-node-5] => (item=rancher/k3s)
2025-05-25 03:52:56.149756 | orchestrator | skipping: [testbed-node-5]
2025-05-25 03:52:56.149765 | orchestrator | skipping: [testbed-node-0] => (item=rancher)
2025-05-25 03:52:56.149774 | orchestrator | skipping: [testbed-node-0] => (item=rancher/k3s)
2025-05-25 03:52:56.149784 | orchestrator | skipping: [testbed-node-0]
2025-05-25 03:52:56.149793 | orchestrator | skipping: [testbed-node-1] => (item=rancher)
2025-05-25 03:52:56.149803 | orchestrator | skipping: [testbed-node-1] => (item=rancher/k3s)
2025-05-25 03:52:56.149812 | orchestrator | skipping: [testbed-node-1]
2025-05-25 03:52:56.149822 | orchestrator
| skipping: [testbed-node-2] => (item=rancher)  2025-05-25 03:52:56.149831 | orchestrator | skipping: [testbed-node-2] => (item=rancher/k3s)  2025-05-25 03:52:56.149841 | orchestrator | skipping: [testbed-node-2] 2025-05-25 03:52:56.149883 | orchestrator | 2025-05-25 03:52:56.149893 | orchestrator | TASK [k3s_custom_registries : Insert registries into /etc/rancher/k3s/registries.yaml] *** 2025-05-25 03:52:56.149917 | orchestrator | Sunday 25 May 2025 03:48:18 +0000 (0:00:01.198) 0:00:22.105 ************ 2025-05-25 03:52:56.149932 | orchestrator | skipping: [testbed-node-3] 2025-05-25 03:52:56.149943 | orchestrator | skipping: [testbed-node-4] 2025-05-25 03:52:56.149952 | orchestrator | skipping: [testbed-node-5] 2025-05-25 03:52:56.149962 | orchestrator | skipping: [testbed-node-0] 2025-05-25 03:52:56.149971 | orchestrator | skipping: [testbed-node-1] 2025-05-25 03:52:56.149981 | orchestrator | skipping: [testbed-node-2] 2025-05-25 03:52:56.149991 | orchestrator | 2025-05-25 03:52:56.150000 | orchestrator | PLAY [Deploy k3s master nodes] ************************************************* 2025-05-25 03:52:56.150010 | orchestrator | 2025-05-25 03:52:56.150078 | orchestrator | TASK [k3s_server : Validating arguments against arg spec 'main' - Setup k3s servers] *** 2025-05-25 03:52:56.150089 | orchestrator | Sunday 25 May 2025 03:48:20 +0000 (0:00:01.597) 0:00:23.703 ************ 2025-05-25 03:52:56.150099 | orchestrator | ok: [testbed-node-0] 2025-05-25 03:52:56.150137 | orchestrator | ok: [testbed-node-1] 2025-05-25 03:52:56.150154 | orchestrator | ok: [testbed-node-2] 2025-05-25 03:52:56.150172 | orchestrator | 2025-05-25 03:52:56.150189 | orchestrator | TASK [k3s_server : Stop k3s-init] ********************************************** 2025-05-25 03:52:56.150206 | orchestrator | Sunday 25 May 2025 03:48:22 +0000 (0:00:01.882) 0:00:25.586 ************ 2025-05-25 03:52:56.150217 | orchestrator | ok: [testbed-node-0] 2025-05-25 03:52:56.150226 | orchestrator | ok: 
[testbed-node-1] 2025-05-25 03:52:56.150236 | orchestrator | ok: [testbed-node-2] 2025-05-25 03:52:56.150246 | orchestrator | 2025-05-25 03:52:56.150255 | orchestrator | TASK [k3s_server : Stop k3s] *************************************************** 2025-05-25 03:52:56.150265 | orchestrator | Sunday 25 May 2025 03:48:23 +0000 (0:00:01.051) 0:00:26.638 ************ 2025-05-25 03:52:56.150274 | orchestrator | ok: [testbed-node-0] 2025-05-25 03:52:56.150284 | orchestrator | ok: [testbed-node-1] 2025-05-25 03:52:56.150294 | orchestrator | ok: [testbed-node-2] 2025-05-25 03:52:56.150303 | orchestrator | 2025-05-25 03:52:56.150313 | orchestrator | TASK [k3s_server : Clean previous runs of k3s-init] **************************** 2025-05-25 03:52:56.150322 | orchestrator | Sunday 25 May 2025 03:48:24 +0000 (0:00:01.058) 0:00:27.697 ************ 2025-05-25 03:52:56.150332 | orchestrator | ok: [testbed-node-1] 2025-05-25 03:52:56.150341 | orchestrator | ok: [testbed-node-0] 2025-05-25 03:52:56.150351 | orchestrator | ok: [testbed-node-2] 2025-05-25 03:52:56.150360 | orchestrator | 2025-05-25 03:52:56.150370 | orchestrator | TASK [k3s_server : Deploy K3s http_proxy conf] ********************************* 2025-05-25 03:52:56.150380 | orchestrator | Sunday 25 May 2025 03:48:25 +0000 (0:00:00.856) 0:00:28.553 ************ 2025-05-25 03:52:56.150389 | orchestrator | skipping: [testbed-node-0] 2025-05-25 03:52:56.150399 | orchestrator | skipping: [testbed-node-1] 2025-05-25 03:52:56.150409 | orchestrator | skipping: [testbed-node-2] 2025-05-25 03:52:56.150418 | orchestrator | 2025-05-25 03:52:56.150428 | orchestrator | TASK [k3s_server : Deploy vip manifest] **************************************** 2025-05-25 03:52:56.150437 | orchestrator | Sunday 25 May 2025 03:48:25 +0000 (0:00:00.392) 0:00:28.946 ************ 2025-05-25 03:52:56.150447 | orchestrator | included: /ansible/roles/k3s_server/tasks/vip.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-25 
03:52:56.150456 | orchestrator | 2025-05-25 03:52:56.150466 | orchestrator | TASK [k3s_server : Set _kube_vip_bgp_peers fact] ******************************* 2025-05-25 03:52:56.150476 | orchestrator | Sunday 25 May 2025 03:48:26 +0000 (0:00:00.850) 0:00:29.797 ************ 2025-05-25 03:52:56.150485 | orchestrator | ok: [testbed-node-0] 2025-05-25 03:52:56.150495 | orchestrator | ok: [testbed-node-1] 2025-05-25 03:52:56.150508 | orchestrator | ok: [testbed-node-2] 2025-05-25 03:52:56.150579 | orchestrator | 2025-05-25 03:52:56.150592 | orchestrator | TASK [k3s_server : Create manifests directory on first master] ***************** 2025-05-25 03:52:56.150602 | orchestrator | Sunday 25 May 2025 03:48:28 +0000 (0:00:02.174) 0:00:31.971 ************ 2025-05-25 03:52:56.150612 | orchestrator | skipping: [testbed-node-1] 2025-05-25 03:52:56.150622 | orchestrator | skipping: [testbed-node-2] 2025-05-25 03:52:56.150641 | orchestrator | changed: [testbed-node-0] 2025-05-25 03:52:56.150651 | orchestrator | 2025-05-25 03:52:56.150661 | orchestrator | TASK [k3s_server : Download vip rbac manifest to first master] ***************** 2025-05-25 03:52:56.150670 | orchestrator | Sunday 25 May 2025 03:48:29 +0000 (0:00:00.899) 0:00:32.871 ************ 2025-05-25 03:52:56.150680 | orchestrator | skipping: [testbed-node-1] 2025-05-25 03:52:56.150689 | orchestrator | skipping: [testbed-node-2] 2025-05-25 03:52:56.150699 | orchestrator | changed: [testbed-node-0] 2025-05-25 03:52:56.150733 | orchestrator | 2025-05-25 03:52:56.150744 | orchestrator | TASK [k3s_server : Copy vip manifest to first master] ************************** 2025-05-25 03:52:56.150754 | orchestrator | Sunday 25 May 2025 03:48:30 +0000 (0:00:01.023) 0:00:33.895 ************ 2025-05-25 03:52:56.150764 | orchestrator | skipping: [testbed-node-1] 2025-05-25 03:52:56.150773 | orchestrator | skipping: [testbed-node-2] 2025-05-25 03:52:56.150783 | orchestrator | changed: [testbed-node-0] 2025-05-25 03:52:56.150793 | 
orchestrator | 2025-05-25 03:52:56.150802 | orchestrator | TASK [k3s_server : Deploy metallb manifest] ************************************ 2025-05-25 03:52:56.150812 | orchestrator | Sunday 25 May 2025 03:48:32 +0000 (0:00:01.929) 0:00:35.824 ************ 2025-05-25 03:52:56.150822 | orchestrator | skipping: [testbed-node-0] 2025-05-25 03:52:56.150831 | orchestrator | skipping: [testbed-node-1] 2025-05-25 03:52:56.150841 | orchestrator | skipping: [testbed-node-2] 2025-05-25 03:52:56.150850 | orchestrator | 2025-05-25 03:52:56.150860 | orchestrator | TASK [k3s_server : Deploy kube-vip manifest] *********************************** 2025-05-25 03:52:56.150869 | orchestrator | Sunday 25 May 2025 03:48:32 +0000 (0:00:00.354) 0:00:36.179 ************ 2025-05-25 03:52:56.150879 | orchestrator | skipping: [testbed-node-0] 2025-05-25 03:52:56.150888 | orchestrator | skipping: [testbed-node-1] 2025-05-25 03:52:56.150898 | orchestrator | skipping: [testbed-node-2] 2025-05-25 03:52:56.150907 | orchestrator | 2025-05-25 03:52:56.150917 | orchestrator | TASK [k3s_server : Init cluster inside the transient k3s-init service] ********* 2025-05-25 03:52:56.150927 | orchestrator | Sunday 25 May 2025 03:48:33 +0000 (0:00:00.516) 0:00:36.695 ************ 2025-05-25 03:52:56.150936 | orchestrator | changed: [testbed-node-0] 2025-05-25 03:52:56.150946 | orchestrator | changed: [testbed-node-1] 2025-05-25 03:52:56.150955 | orchestrator | changed: [testbed-node-2] 2025-05-25 03:52:56.150965 | orchestrator | 2025-05-25 03:52:56.150975 | orchestrator | TASK [k3s_server : Verify that all nodes actually joined (check k3s-init.service if this fails)] *** 2025-05-25 03:52:56.150985 | orchestrator | Sunday 25 May 2025 03:48:35 +0000 (0:00:01.774) 0:00:38.470 ************ 2025-05-25 03:52:56.151008 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Verify that all nodes actually joined (check k3s-init.service if this fails) (20 retries left). 
2025-05-25 03:52:56.151019 | orchestrator | FAILED - RETRYING: [testbed-node-1]: Verify that all nodes actually joined (check k3s-init.service if this fails) (20 retries left). 2025-05-25 03:52:56.151028 | orchestrator | FAILED - RETRYING: [testbed-node-2]: Verify that all nodes actually joined (check k3s-init.service if this fails) (20 retries left). 2025-05-25 03:52:56.151038 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Verify that all nodes actually joined (check k3s-init.service if this fails) (19 retries left). 2025-05-25 03:52:56.151048 | orchestrator | FAILED - RETRYING: [testbed-node-1]: Verify that all nodes actually joined (check k3s-init.service if this fails) (19 retries left). 2025-05-25 03:52:56.151058 | orchestrator | FAILED - RETRYING: [testbed-node-2]: Verify that all nodes actually joined (check k3s-init.service if this fails) (19 retries left). 2025-05-25 03:52:56.151068 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Verify that all nodes actually joined (check k3s-init.service if this fails) (18 retries left). 2025-05-25 03:52:56.151077 | orchestrator | FAILED - RETRYING: [testbed-node-1]: Verify that all nodes actually joined (check k3s-init.service if this fails) (18 retries left). 2025-05-25 03:52:56.151087 | orchestrator | FAILED - RETRYING: [testbed-node-2]: Verify that all nodes actually joined (check k3s-init.service if this fails) (18 retries left). 2025-05-25 03:52:56.151121 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Verify that all nodes actually joined (check k3s-init.service if this fails) (17 retries left). 2025-05-25 03:52:56.151132 | orchestrator | FAILED - RETRYING: [testbed-node-1]: Verify that all nodes actually joined (check k3s-init.service if this fails) (17 retries left). 2025-05-25 03:52:56.151141 | orchestrator | FAILED - RETRYING: [testbed-node-2]: Verify that all nodes actually joined (check k3s-init.service if this fails) (17 retries left). 
2025-05-25 03:52:56.151151 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Verify that all nodes actually joined (check k3s-init.service if this fails) (16 retries left). 2025-05-25 03:52:56.151161 | orchestrator | FAILED - RETRYING: [testbed-node-1]: Verify that all nodes actually joined (check k3s-init.service if this fails) (16 retries left). 2025-05-25 03:52:56.151171 | orchestrator | FAILED - RETRYING: [testbed-node-2]: Verify that all nodes actually joined (check k3s-init.service if this fails) (16 retries left). 2025-05-25 03:52:56.151180 | orchestrator | ok: [testbed-node-0] 2025-05-25 03:52:56.151190 | orchestrator | ok: [testbed-node-1] 2025-05-25 03:52:56.151200 | orchestrator | ok: [testbed-node-2] 2025-05-25 03:52:56.151209 | orchestrator | 2025-05-25 03:52:56.151219 | orchestrator | TASK [k3s_server : Save logs of k3s-init.service] ****************************** 2025-05-25 03:52:56.151229 | orchestrator | Sunday 25 May 2025 03:49:30 +0000 (0:00:55.987) 0:01:34.457 ************ 2025-05-25 03:52:56.151238 | orchestrator | skipping: [testbed-node-0] 2025-05-25 03:52:56.151248 | orchestrator | skipping: [testbed-node-1] 2025-05-25 03:52:56.151258 | orchestrator | skipping: [testbed-node-2] 2025-05-25 03:52:56.151267 | orchestrator | 2025-05-25 03:52:56.151277 | orchestrator | TASK [k3s_server : Kill the temporary service used for initialization] ********* 2025-05-25 03:52:56.151286 | orchestrator | Sunday 25 May 2025 03:49:31 +0000 (0:00:00.344) 0:01:34.801 ************ 2025-05-25 03:52:56.151296 | orchestrator | changed: [testbed-node-1] 2025-05-25 03:52:56.151305 | orchestrator | changed: [testbed-node-0] 2025-05-25 03:52:56.151315 | orchestrator | changed: [testbed-node-2] 2025-05-25 03:52:56.151325 | orchestrator | 2025-05-25 03:52:56.151334 | orchestrator | TASK [k3s_server : Copy K3s service file] ************************************** 2025-05-25 03:52:56.151344 | orchestrator | Sunday 25 May 2025 03:49:32 +0000 (0:00:00.958) 0:01:35.760 
************ 2025-05-25 03:52:56.151353 | orchestrator | changed: [testbed-node-0] 2025-05-25 03:52:56.151363 | orchestrator | changed: [testbed-node-2] 2025-05-25 03:52:56.151373 | orchestrator | changed: [testbed-node-1] 2025-05-25 03:52:56.151382 | orchestrator | 2025-05-25 03:52:56.151392 | orchestrator | TASK [k3s_server : Enable and check K3s service] ******************************* 2025-05-25 03:52:56.151402 | orchestrator | Sunday 25 May 2025 03:49:33 +0000 (0:00:01.218) 0:01:36.979 ************ 2025-05-25 03:52:56.151412 | orchestrator | changed: [testbed-node-2] 2025-05-25 03:52:56.151421 | orchestrator | changed: [testbed-node-0] 2025-05-25 03:52:56.151431 | orchestrator | changed: [testbed-node-1] 2025-05-25 03:52:56.151440 | orchestrator | 2025-05-25 03:52:56.151450 | orchestrator | TASK [k3s_server : Wait for node-token] **************************************** 2025-05-25 03:52:56.151460 | orchestrator | Sunday 25 May 2025 03:49:48 +0000 (0:00:15.213) 0:01:52.193 ************ 2025-05-25 03:52:56.151469 | orchestrator | ok: [testbed-node-0] 2025-05-25 03:52:56.151479 | orchestrator | ok: [testbed-node-1] 2025-05-25 03:52:56.151488 | orchestrator | ok: [testbed-node-2] 2025-05-25 03:52:56.151498 | orchestrator | 2025-05-25 03:52:56.151508 | orchestrator | TASK [k3s_server : Register node-token file access mode] *********************** 2025-05-25 03:52:56.151517 | orchestrator | Sunday 25 May 2025 03:49:49 +0000 (0:00:00.767) 0:01:52.960 ************ 2025-05-25 03:52:56.151527 | orchestrator | ok: [testbed-node-0] 2025-05-25 03:52:56.151536 | orchestrator | ok: [testbed-node-1] 2025-05-25 03:52:56.151546 | orchestrator | ok: [testbed-node-2] 2025-05-25 03:52:56.151555 | orchestrator | 2025-05-25 03:52:56.151565 | orchestrator | TASK [k3s_server : Change file access node-token] ****************************** 2025-05-25 03:52:56.151580 | orchestrator | Sunday 25 May 2025 03:49:50 +0000 (0:00:00.793) 0:01:53.754 ************ 2025-05-25 03:52:56.151590 | 
orchestrator | changed: [testbed-node-0] 2025-05-25 03:52:56.151600 | orchestrator | changed: [testbed-node-1] 2025-05-25 03:52:56.151610 | orchestrator | changed: [testbed-node-2] 2025-05-25 03:52:56.151620 | orchestrator | 2025-05-25 03:52:56.151651 | orchestrator | TASK [k3s_server : Read node-token from master] ******************************** 2025-05-25 03:52:56.151667 | orchestrator | Sunday 25 May 2025 03:49:50 +0000 (0:00:00.694) 0:01:54.448 ************ 2025-05-25 03:52:56.151682 | orchestrator | ok: [testbed-node-2] 2025-05-25 03:52:56.151699 | orchestrator | ok: [testbed-node-1] 2025-05-25 03:52:56.151716 | orchestrator | ok: [testbed-node-0] 2025-05-25 03:52:56.151734 | orchestrator | 2025-05-25 03:52:56.151750 | orchestrator | TASK [k3s_server : Store Master node-token] ************************************ 2025-05-25 03:52:56.151768 | orchestrator | Sunday 25 May 2025 03:49:52 +0000 (0:00:01.068) 0:01:55.517 ************ 2025-05-25 03:52:56.151779 | orchestrator | ok: [testbed-node-0] 2025-05-25 03:52:56.151788 | orchestrator | ok: [testbed-node-1] 2025-05-25 03:52:56.151798 | orchestrator | ok: [testbed-node-2] 2025-05-25 03:52:56.151808 | orchestrator | 2025-05-25 03:52:56.151817 | orchestrator | TASK [k3s_server : Restore node-token file access] ***************************** 2025-05-25 03:52:56.151827 | orchestrator | Sunday 25 May 2025 03:49:52 +0000 (0:00:00.580) 0:01:56.097 ************ 2025-05-25 03:52:56.151836 | orchestrator | changed: [testbed-node-0] 2025-05-25 03:52:56.151846 | orchestrator | changed: [testbed-node-1] 2025-05-25 03:52:56.151856 | orchestrator | changed: [testbed-node-2] 2025-05-25 03:52:56.151865 | orchestrator | 2025-05-25 03:52:56.151875 | orchestrator | TASK [k3s_server : Create directory .kube] ************************************* 2025-05-25 03:52:56.151884 | orchestrator | Sunday 25 May 2025 03:49:53 +0000 (0:00:00.761) 0:01:56.859 ************ 2025-05-25 03:52:56.151894 | orchestrator | changed: [testbed-node-0] 
2025-05-25 03:52:56.151903 | orchestrator | changed: [testbed-node-1] 2025-05-25 03:52:56.151913 | orchestrator | changed: [testbed-node-2] 2025-05-25 03:52:56.151922 | orchestrator | 2025-05-25 03:52:56.151932 | orchestrator | TASK [k3s_server : Copy config file to user home directory] ******************** 2025-05-25 03:52:56.151941 | orchestrator | Sunday 25 May 2025 03:49:54 +0000 (0:00:00.734) 0:01:57.593 ************ 2025-05-25 03:52:56.151951 | orchestrator | changed: [testbed-node-0] 2025-05-25 03:52:56.151961 | orchestrator | changed: [testbed-node-1] 2025-05-25 03:52:56.151970 | orchestrator | changed: [testbed-node-2] 2025-05-25 03:52:56.151979 | orchestrator | 2025-05-25 03:52:56.151989 | orchestrator | TASK [k3s_server : Configure kubectl cluster to https://192.168.16.8:6443] ***** 2025-05-25 03:52:56.151999 | orchestrator | Sunday 25 May 2025 03:49:55 +0000 (0:00:01.421) 0:01:59.015 ************ 2025-05-25 03:52:56.152008 | orchestrator | changed: [testbed-node-0] 2025-05-25 03:52:56.152018 | orchestrator | changed: [testbed-node-1] 2025-05-25 03:52:56.152027 | orchestrator | changed: [testbed-node-2] 2025-05-25 03:52:56.152036 | orchestrator | 2025-05-25 03:52:56.152046 | orchestrator | TASK [k3s_server : Create kubectl symlink] ************************************* 2025-05-25 03:52:56.152056 | orchestrator | Sunday 25 May 2025 03:49:56 +0000 (0:00:00.822) 0:01:59.838 ************ 2025-05-25 03:52:56.152065 | orchestrator | skipping: [testbed-node-0] 2025-05-25 03:52:56.152075 | orchestrator | skipping: [testbed-node-1] 2025-05-25 03:52:56.152084 | orchestrator | skipping: [testbed-node-2] 2025-05-25 03:52:56.152094 | orchestrator | 2025-05-25 03:52:56.152135 | orchestrator | TASK [k3s_server : Create crictl symlink] ************************************** 2025-05-25 03:52:56.152147 | orchestrator | Sunday 25 May 2025 03:49:56 +0000 (0:00:00.287) 0:02:00.125 ************ 2025-05-25 03:52:56.152157 | orchestrator | skipping: [testbed-node-0] 2025-05-25 
03:52:56.152167 | orchestrator | skipping: [testbed-node-1] 2025-05-25 03:52:56.152176 | orchestrator | skipping: [testbed-node-2] 2025-05-25 03:52:56.152186 | orchestrator | 2025-05-25 03:52:56.152195 | orchestrator | TASK [k3s_server : Get contents of manifests folder] *************************** 2025-05-25 03:52:56.152205 | orchestrator | Sunday 25 May 2025 03:49:56 +0000 (0:00:00.295) 0:02:00.421 ************ 2025-05-25 03:52:56.152223 | orchestrator | ok: [testbed-node-1] 2025-05-25 03:52:56.152232 | orchestrator | ok: [testbed-node-0] 2025-05-25 03:52:56.152242 | orchestrator | ok: [testbed-node-2] 2025-05-25 03:52:56.152252 | orchestrator | 2025-05-25 03:52:56.152261 | orchestrator | TASK [k3s_server : Get sub dirs of manifests folder] *************************** 2025-05-25 03:52:56.152271 | orchestrator | Sunday 25 May 2025 03:49:58 +0000 (0:00:01.083) 0:02:01.505 ************ 2025-05-25 03:52:56.152281 | orchestrator | ok: [testbed-node-0] 2025-05-25 03:52:56.152291 | orchestrator | ok: [testbed-node-1] 2025-05-25 03:52:56.152300 | orchestrator | ok: [testbed-node-2] 2025-05-25 03:52:56.152310 | orchestrator | 2025-05-25 03:52:56.152320 | orchestrator | TASK [k3s_server : Remove manifests and folders that are only needed for bootstrapping cluster so k3s doesn't auto apply on start] *** 2025-05-25 03:52:56.152329 | orchestrator | Sunday 25 May 2025 03:49:58 +0000 (0:00:00.593) 0:02:02.098 ************ 2025-05-25 03:52:56.152339 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/ccm.yaml) 2025-05-25 03:52:56.152349 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/ccm.yaml) 2025-05-25 03:52:56.152359 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/ccm.yaml) 2025-05-25 03:52:56.152368 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/rolebindings.yaml) 2025-05-25 03:52:56.152379 | orchestrator | 
changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/rolebindings.yaml) 2025-05-25 03:52:56.152388 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/rolebindings.yaml) 2025-05-25 03:52:56.152398 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/local-storage.yaml) 2025-05-25 03:52:56.152408 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/local-storage.yaml) 2025-05-25 03:52:56.152417 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/local-storage.yaml) 2025-05-25 03:52:56.152427 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/vip.yaml) 2025-05-25 03:52:56.152436 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/runtimes.yaml) 2025-05-25 03:52:56.152446 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/runtimes.yaml) 2025-05-25 03:52:56.152467 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/vip-rbac.yaml) 2025-05-25 03:52:56.152477 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/coredns.yaml) 2025-05-25 03:52:56.152487 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/coredns.yaml) 2025-05-25 03:52:56.152497 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/runtimes.yaml) 2025-05-25 03:52:56.152507 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/metrics-server) 2025-05-25 03:52:56.152516 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/metrics-server) 2025-05-25 03:52:56.152526 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/coredns.yaml) 2025-05-25 03:52:56.152536 | orchestrator | 
changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/metrics-server) 2025-05-25 03:52:56.152545 | orchestrator | 2025-05-25 03:52:56.152555 | orchestrator | PLAY [Deploy k3s worker nodes] ************************************************* 2025-05-25 03:52:56.152565 | orchestrator | 2025-05-25 03:52:56.152574 | orchestrator | TASK [k3s_agent : Validating arguments against arg spec 'main' - Setup k3s agents] *** 2025-05-25 03:52:56.152584 | orchestrator | Sunday 25 May 2025 03:50:01 +0000 (0:00:02.973) 0:02:05.071 ************ 2025-05-25 03:52:56.152594 | orchestrator | ok: [testbed-node-3] 2025-05-25 03:52:56.152604 | orchestrator | ok: [testbed-node-4] 2025-05-25 03:52:56.152614 | orchestrator | ok: [testbed-node-5] 2025-05-25 03:52:56.152636 | orchestrator | 2025-05-25 03:52:56.152646 | orchestrator | TASK [k3s_agent : Check if system is PXE-booted] ******************************* 2025-05-25 03:52:56.152656 | orchestrator | Sunday 25 May 2025 03:50:02 +0000 (0:00:00.480) 0:02:05.551 ************ 2025-05-25 03:52:56.152665 | orchestrator | ok: [testbed-node-3] 2025-05-25 03:52:56.152675 | orchestrator | ok: [testbed-node-4] 2025-05-25 03:52:56.152690 | orchestrator | ok: [testbed-node-5] 2025-05-25 03:52:56.152707 | orchestrator | 2025-05-25 03:52:56.152724 | orchestrator | TASK [k3s_agent : Set fact for PXE-booted system] ****************************** 2025-05-25 03:52:56.152743 | orchestrator | Sunday 25 May 2025 03:50:02 +0000 (0:00:00.643) 0:02:06.195 ************ 2025-05-25 03:52:56.152762 | orchestrator | ok: [testbed-node-3] 2025-05-25 03:52:56.152778 | orchestrator | ok: [testbed-node-4] 2025-05-25 03:52:56.152795 | orchestrator | ok: [testbed-node-5] 2025-05-25 03:52:56.152806 | orchestrator | 2025-05-25 03:52:56.152815 | orchestrator | TASK [k3s_agent : Include http_proxy configuration tasks] ********************** 2025-05-25 03:52:56.152825 | orchestrator | Sunday 25 May 2025 03:50:03 +0000 (0:00:00.311) 0:02:06.506 ************ 
2025-05-25 03:52:56.152835 | orchestrator | included: /ansible/roles/k3s_agent/tasks/http_proxy.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-05-25 03:52:56.152844 | orchestrator | 2025-05-25 03:52:56.152854 | orchestrator | TASK [k3s_agent : Create k3s-node.service.d directory] ************************* 2025-05-25 03:52:56.152863 | orchestrator | Sunday 25 May 2025 03:50:03 +0000 (0:00:00.630) 0:02:07.137 ************ 2025-05-25 03:52:56.152873 | orchestrator | skipping: [testbed-node-3] 2025-05-25 03:52:56.152883 | orchestrator | skipping: [testbed-node-4] 2025-05-25 03:52:56.152892 | orchestrator | skipping: [testbed-node-5] 2025-05-25 03:52:56.152902 | orchestrator | 2025-05-25 03:52:56.152911 | orchestrator | TASK [k3s_agent : Copy K3s http_proxy conf file] ******************************* 2025-05-25 03:52:56.152920 | orchestrator | Sunday 25 May 2025 03:50:03 +0000 (0:00:00.287) 0:02:07.425 ************ 2025-05-25 03:52:56.152930 | orchestrator | skipping: [testbed-node-3] 2025-05-25 03:52:56.152939 | orchestrator | skipping: [testbed-node-4] 2025-05-25 03:52:56.152949 | orchestrator | skipping: [testbed-node-5] 2025-05-25 03:52:56.152959 | orchestrator | 2025-05-25 03:52:56.152968 | orchestrator | TASK [k3s_agent : Deploy K3s http_proxy conf] ********************************** 2025-05-25 03:52:56.152977 | orchestrator | Sunday 25 May 2025 03:50:04 +0000 (0:00:00.300) 0:02:07.726 ************ 2025-05-25 03:52:56.152987 | orchestrator | skipping: [testbed-node-3] 2025-05-25 03:52:56.152996 | orchestrator | skipping: [testbed-node-4] 2025-05-25 03:52:56.153006 | orchestrator | skipping: [testbed-node-5] 2025-05-25 03:52:56.153016 | orchestrator | 2025-05-25 03:52:56.153025 | orchestrator | TASK [k3s_agent : Configure the k3s service] *********************************** 2025-05-25 03:52:56.153034 | orchestrator | Sunday 25 May 2025 03:50:04 +0000 (0:00:00.293) 0:02:08.019 ************ 2025-05-25 03:52:56.153044 | orchestrator | changed: 
[testbed-node-3] 2025-05-25 03:52:56.153053 | orchestrator | changed: [testbed-node-4] 2025-05-25 03:52:56.153062 | orchestrator | changed: [testbed-node-5] 2025-05-25 03:52:56.153072 | orchestrator | 2025-05-25 03:52:56.153081 | orchestrator | TASK [k3s_agent : Manage k3s service] ****************************************** 2025-05-25 03:52:56.153091 | orchestrator | Sunday 25 May 2025 03:50:05 +0000 (0:00:01.406) 0:02:09.427 ************ 2025-05-25 03:52:56.153100 | orchestrator | changed: [testbed-node-5] 2025-05-25 03:52:56.153135 | orchestrator | changed: [testbed-node-3] 2025-05-25 03:52:56.153145 | orchestrator | changed: [testbed-node-4] 2025-05-25 03:52:56.153155 | orchestrator | 2025-05-25 03:52:56.153165 | orchestrator | PLAY [Prepare kubeconfig file] ************************************************* 2025-05-25 03:52:56.153174 | orchestrator | 2025-05-25 03:52:56.153184 | orchestrator | TASK [Get home directory of operator user] ************************************* 2025-05-25 03:52:56.153194 | orchestrator | Sunday 25 May 2025 03:50:16 +0000 (0:00:10.499) 0:02:19.926 ************ 2025-05-25 03:52:56.153203 | orchestrator | ok: [testbed-manager] 2025-05-25 03:52:56.153213 | orchestrator | 2025-05-25 03:52:56.153223 | orchestrator | TASK [Create .kube directory] ************************************************** 2025-05-25 03:52:56.153240 | orchestrator | Sunday 25 May 2025 03:50:17 +0000 (0:00:00.720) 0:02:20.647 ************ 2025-05-25 03:52:56.153250 | orchestrator | changed: [testbed-manager] 2025-05-25 03:52:56.153259 | orchestrator | 2025-05-25 03:52:56.153269 | orchestrator | TASK [Get kubeconfig file] ***************************************************** 2025-05-25 03:52:56.153278 | orchestrator | Sunday 25 May 2025 03:50:17 +0000 (0:00:00.404) 0:02:21.052 ************ 2025-05-25 03:52:56.153288 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] 2025-05-25 03:52:56.153297 | orchestrator | 2025-05-25 03:52:56.153314 | 
orchestrator | TASK [Write kubeconfig file] *************************************************** 2025-05-25 03:52:56.153324 | orchestrator | Sunday 25 May 2025 03:50:18 +0000 (0:00:00.977) 0:02:22.030 ************ 2025-05-25 03:52:56.153334 | orchestrator | changed: [testbed-manager] 2025-05-25 03:52:56.153343 | orchestrator | 2025-05-25 03:52:56.153353 | orchestrator | TASK [Change server address in the kubeconfig] ********************************* 2025-05-25 03:52:56.153363 | orchestrator | Sunday 25 May 2025 03:50:19 +0000 (0:00:00.804) 0:02:22.835 ************ 2025-05-25 03:52:56.153373 | orchestrator | changed: [testbed-manager] 2025-05-25 03:52:56.153382 | orchestrator | 2025-05-25 03:52:56.153392 | orchestrator | TASK [Make kubeconfig available for use inside the manager service] ************ 2025-05-25 03:52:56.153402 | orchestrator | Sunday 25 May 2025 03:50:19 +0000 (0:00:00.539) 0:02:23.374 ************ 2025-05-25 03:52:56.153411 | orchestrator | changed: [testbed-manager -> localhost] 2025-05-25 03:52:56.153421 | orchestrator | 2025-05-25 03:52:56.153431 | orchestrator | TASK [Change server address in the kubeconfig inside the manager service] ****** 2025-05-25 03:52:56.153440 | orchestrator | Sunday 25 May 2025 03:50:21 +0000 (0:00:01.620) 0:02:24.994 ************ 2025-05-25 03:52:56.153450 | orchestrator | changed: [testbed-manager -> localhost] 2025-05-25 03:52:56.153459 | orchestrator | 2025-05-25 03:52:56.153469 | orchestrator | TASK [Set KUBECONFIG environment variable] ************************************* 2025-05-25 03:52:56.153479 | orchestrator | Sunday 25 May 2025 03:50:22 +0000 (0:00:00.864) 0:02:25.859 ************ 2025-05-25 03:52:56.153488 | orchestrator | changed: [testbed-manager] 2025-05-25 03:52:56.153498 | orchestrator | 2025-05-25 03:52:56.153507 | orchestrator | TASK [Enable kubectl command line completion] ********************************** 2025-05-25 03:52:56.153517 | orchestrator | Sunday 25 May 2025 03:50:22 +0000 (0:00:00.487) 
0:02:26.347 ************ 2025-05-25 03:52:56.153526 | orchestrator | changed: [testbed-manager] 2025-05-25 03:52:56.153536 | orchestrator | 2025-05-25 03:52:56.153545 | orchestrator | PLAY [Apply role kubectl] ****************************************************** 2025-05-25 03:52:56.153555 | orchestrator | 2025-05-25 03:52:56.153565 | orchestrator | TASK [kubectl : Gather variables for each operating system] ******************** 2025-05-25 03:52:56.153574 | orchestrator | Sunday 25 May 2025 03:50:23 +0000 (0:00:00.520) 0:02:26.867 ************ 2025-05-25 03:52:56.153584 | orchestrator | ok: [testbed-manager] 2025-05-25 03:52:56.153593 | orchestrator | 2025-05-25 03:52:56.153603 | orchestrator | TASK [kubectl : Include distribution specific install tasks] ******************* 2025-05-25 03:52:56.153613 | orchestrator | Sunday 25 May 2025 03:50:23 +0000 (0:00:00.162) 0:02:27.029 ************ 2025-05-25 03:52:56.153623 | orchestrator | included: /ansible/roles/kubectl/tasks/install-Debian-family.yml for testbed-manager 2025-05-25 03:52:56.153632 | orchestrator | 2025-05-25 03:52:56.153642 | orchestrator | TASK [kubectl : Remove old architecture-dependent repository] ****************** 2025-05-25 03:52:56.153651 | orchestrator | Sunday 25 May 2025 03:50:24 +0000 (0:00:00.510) 0:02:27.540 ************ 2025-05-25 03:52:56.153661 | orchestrator | ok: [testbed-manager] 2025-05-25 03:52:56.153670 | orchestrator | 2025-05-25 03:52:56.153680 | orchestrator | TASK [kubectl : Install apt-transport-https package] *************************** 2025-05-25 03:52:56.153690 | orchestrator | Sunday 25 May 2025 03:50:24 +0000 (0:00:00.808) 0:02:28.348 ************ 2025-05-25 03:52:56.153699 | orchestrator | ok: [testbed-manager] 2025-05-25 03:52:56.153709 | orchestrator | 2025-05-25 03:52:56.153718 | orchestrator | TASK [kubectl : Add repository gpg key] **************************************** 2025-05-25 03:52:56.153729 | orchestrator | Sunday 25 May 2025 03:50:26 +0000 (0:00:01.508) 
0:02:29.856 ************ 2025-05-25 03:52:56.153757 | orchestrator | changed: [testbed-manager] 2025-05-25 03:52:56.153776 | orchestrator | 2025-05-25 03:52:56.153794 | orchestrator | TASK [kubectl : Set permissions of gpg key] ************************************ 2025-05-25 03:52:56.153811 | orchestrator | Sunday 25 May 2025 03:50:27 +0000 (0:00:00.727) 0:02:30.583 ************ 2025-05-25 03:52:56.153827 | orchestrator | ok: [testbed-manager] 2025-05-25 03:52:56.153837 | orchestrator | 2025-05-25 03:52:56.153847 | orchestrator | TASK [kubectl : Add repository Debian] ***************************************** 2025-05-25 03:52:56.153856 | orchestrator | Sunday 25 May 2025 03:50:27 +0000 (0:00:00.449) 0:02:31.033 ************ 2025-05-25 03:52:56.153866 | orchestrator | changed: [testbed-manager] 2025-05-25 03:52:56.153875 | orchestrator | 2025-05-25 03:52:56.153885 | orchestrator | TASK [kubectl : Install required packages] ************************************* 2025-05-25 03:52:56.153895 | orchestrator | Sunday 25 May 2025 03:50:34 +0000 (0:00:06.728) 0:02:37.761 ************ 2025-05-25 03:52:56.153904 | orchestrator | changed: [testbed-manager] 2025-05-25 03:52:56.153914 | orchestrator | 2025-05-25 03:52:56.153923 | orchestrator | TASK [kubectl : Remove kubectl symlink] **************************************** 2025-05-25 03:52:56.153933 | orchestrator | Sunday 25 May 2025 03:50:45 +0000 (0:00:10.837) 0:02:48.598 ************ 2025-05-25 03:52:56.153942 | orchestrator | ok: [testbed-manager] 2025-05-25 03:52:56.153951 | orchestrator | 2025-05-25 03:52:56.154810 | orchestrator | PLAY [Run post actions on master nodes] **************************************** 2025-05-25 03:52:56.154860 | orchestrator | 2025-05-25 03:52:56.154868 | orchestrator | TASK [k3s_server_post : Validating arguments against arg spec 'main' - Configure k3s cluster] *** 2025-05-25 03:52:56.154875 | orchestrator | Sunday 25 May 2025 03:50:45 +0000 (0:00:00.475) 0:02:49.074 ************ 2025-05-25 
03:52:56.154882 | orchestrator | ok: [testbed-node-0] 2025-05-25 03:52:56.154890 | orchestrator | ok: [testbed-node-1] 2025-05-25 03:52:56.154896 | orchestrator | ok: [testbed-node-2] 2025-05-25 03:52:56.154903 | orchestrator | 2025-05-25 03:52:56.154909 | orchestrator | TASK [k3s_server_post : Deploy calico] ***************************************** 2025-05-25 03:52:56.154916 | orchestrator | Sunday 25 May 2025 03:50:46 +0000 (0:00:00.470) 0:02:49.545 ************ 2025-05-25 03:52:56.154923 | orchestrator | skipping: [testbed-node-0] 2025-05-25 03:52:56.154931 | orchestrator | skipping: [testbed-node-1] 2025-05-25 03:52:56.154939 | orchestrator | skipping: [testbed-node-2] 2025-05-25 03:52:56.154947 | orchestrator | 2025-05-25 03:52:56.154954 | orchestrator | TASK [k3s_server_post : Deploy cilium] ***************************************** 2025-05-25 03:52:56.154962 | orchestrator | Sunday 25 May 2025 03:50:46 +0000 (0:00:00.332) 0:02:49.878 ************ 2025-05-25 03:52:56.154970 | orchestrator | included: /ansible/roles/k3s_server_post/tasks/cilium.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-25 03:52:56.154977 | orchestrator | 2025-05-25 03:52:56.154985 | orchestrator | TASK [k3s_server_post : Create tmp directory on first master] ****************** 2025-05-25 03:52:56.155005 | orchestrator | Sunday 25 May 2025 03:50:46 +0000 (0:00:00.550) 0:02:50.428 ************ 2025-05-25 03:52:56.155013 | orchestrator | changed: [testbed-node-0 -> localhost] 2025-05-25 03:52:56.155021 | orchestrator | 2025-05-25 03:52:56.155033 | orchestrator | TASK [k3s_server_post : Wait for connectivity to kube VIP] ********************* 2025-05-25 03:52:56.155041 | orchestrator | Sunday 25 May 2025 03:50:47 +0000 (0:00:00.853) 0:02:51.281 ************ 2025-05-25 03:52:56.155049 | orchestrator | ok: [testbed-node-0 -> localhost] 2025-05-25 03:52:56.155056 | orchestrator | 2025-05-25 03:52:56.155064 | orchestrator | TASK [k3s_server_post : Fail if kube VIP not reachable] 
************************ 2025-05-25 03:52:56.155071 | orchestrator | Sunday 25 May 2025 03:50:48 +0000 (0:00:00.769) 0:02:52.050 ************ 2025-05-25 03:52:56.155079 | orchestrator | skipping: [testbed-node-0] 2025-05-25 03:52:56.155086 | orchestrator | 2025-05-25 03:52:56.155093 | orchestrator | TASK [k3s_server_post : Test for existing Cilium install] ********************** 2025-05-25 03:52:56.155101 | orchestrator | Sunday 25 May 2025 03:50:49 +0000 (0:00:00.505) 0:02:52.556 ************ 2025-05-25 03:52:56.155127 | orchestrator | ok: [testbed-node-0 -> localhost] 2025-05-25 03:52:56.155144 | orchestrator | 2025-05-25 03:52:56.155152 | orchestrator | TASK [k3s_server_post : Check Cilium version] ********************************** 2025-05-25 03:52:56.155160 | orchestrator | Sunday 25 May 2025 03:50:49 +0000 (0:00:00.891) 0:02:53.448 ************ 2025-05-25 03:52:56.155167 | orchestrator | skipping: [testbed-node-0] 2025-05-25 03:52:56.155175 | orchestrator | 2025-05-25 03:52:56.155182 | orchestrator | TASK [k3s_server_post : Parse installed Cilium version] ************************ 2025-05-25 03:52:56.155190 | orchestrator | Sunday 25 May 2025 03:50:50 +0000 (0:00:00.194) 0:02:53.642 ************ 2025-05-25 03:52:56.155197 | orchestrator | skipping: [testbed-node-0] 2025-05-25 03:52:56.155204 | orchestrator | 2025-05-25 03:52:56.155212 | orchestrator | TASK [k3s_server_post : Determine if Cilium needs update] ********************** 2025-05-25 03:52:56.155219 | orchestrator | Sunday 25 May 2025 03:50:50 +0000 (0:00:00.326) 0:02:53.969 ************ 2025-05-25 03:52:56.155227 | orchestrator | skipping: [testbed-node-0] 2025-05-25 03:52:56.155234 | orchestrator | 2025-05-25 03:52:56.155241 | orchestrator | TASK [k3s_server_post : Log result] ******************************************** 2025-05-25 03:52:56.155249 | orchestrator | Sunday 25 May 2025 03:50:50 +0000 (0:00:00.231) 0:02:54.200 ************ 2025-05-25 03:52:56.155256 | orchestrator | skipping: 
[testbed-node-0] 2025-05-25 03:52:56.155264 | orchestrator | 2025-05-25 03:52:56.155272 | orchestrator | TASK [k3s_server_post : Install Cilium] **************************************** 2025-05-25 03:52:56.155279 | orchestrator | Sunday 25 May 2025 03:50:50 +0000 (0:00:00.182) 0:02:54.383 ************ 2025-05-25 03:52:56.155287 | orchestrator | changed: [testbed-node-0 -> localhost] 2025-05-25 03:52:56.155294 | orchestrator | 2025-05-25 03:52:56.155302 | orchestrator | TASK [k3s_server_post : Wait for Cilium resources] ***************************** 2025-05-25 03:52:56.155309 | orchestrator | Sunday 25 May 2025 03:50:55 +0000 (0:00:04.679) 0:02:59.062 ************ 2025-05-25 03:52:56.155316 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=deployment/cilium-operator) 2025-05-25 03:52:56.155323 | orchestrator | FAILED - RETRYING: [testbed-node-0 -> localhost]: Wait for Cilium resources (30 retries left). 2025-05-25 03:52:56.155331 | orchestrator | FAILED - RETRYING: [testbed-node-0 -> localhost]: Wait for Cilium resources (29 retries left). 
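The "Wait for Cilium resources" task above retried twice ("30 retries left", "29 retries left") before every rollout reported ready. A minimal sketch of that kind of bounded readiness poll, assuming a hypothetical `check` callable standing in for something like `kubectl rollout status <resource>` (the real role's implementation may differ):

```python
import time

def wait_for_resources(resources, check, retries=30, delay=2.0):
    """Poll each resource until check(resource) succeeds or retries run out.

    `check` is a stand-in for a readiness probe such as
    `kubectl rollout status deployment/cilium-operator`; it should
    return True once the rollout is complete.
    """
    for resource in resources:
        for attempt in range(retries):
            if check(resource):
                break
            # Mirrors the "FAILED - RETRYING: ... (N retries left)" log lines.
            print(f"retrying {resource} ({retries - attempt - 1} retries left)")
            time.sleep(delay)
        else:
            raise TimeoutError(f"{resource} never became ready")
```

This matches the log's behavior: transient failures are retried with a fixed delay, and only exhausting the retry budget fails the task.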
2025-05-25 03:52:56.155338 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=daemonset/cilium) 2025-05-25 03:52:56.155345 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=deployment/hubble-relay) 2025-05-25 03:52:56.155351 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=deployment/hubble-ui) 2025-05-25 03:52:56.155358 | orchestrator | 2025-05-25 03:52:56.155364 | orchestrator | TASK [k3s_server_post : Set _cilium_bgp_neighbors fact] ************************ 2025-05-25 03:52:56.155371 | orchestrator | Sunday 25 May 2025 03:52:29 +0000 (0:01:33.424) 0:04:32.487 ************ 2025-05-25 03:52:56.155378 | orchestrator | ok: [testbed-node-0 -> localhost] 2025-05-25 03:52:56.155385 | orchestrator | 2025-05-25 03:52:56.155391 | orchestrator | TASK [k3s_server_post : Copy BGP manifests to first master] ******************** 2025-05-25 03:52:56.155398 | orchestrator | Sunday 25 May 2025 03:52:30 +0000 (0:00:01.505) 0:04:33.992 ************ 2025-05-25 03:52:56.155404 | orchestrator | changed: [testbed-node-0 -> localhost] 2025-05-25 03:52:56.155411 | orchestrator | 2025-05-25 03:52:56.155418 | orchestrator | TASK [k3s_server_post : Apply BGP manifests] *********************************** 2025-05-25 03:52:56.155424 | orchestrator | Sunday 25 May 2025 03:52:32 +0000 (0:00:02.011) 0:04:36.004 ************ 2025-05-25 03:52:56.155431 | orchestrator | changed: [testbed-node-0 -> localhost] 2025-05-25 03:52:56.155437 | orchestrator | 2025-05-25 03:52:56.155444 | orchestrator | TASK [k3s_server_post : Print error message if BGP manifests application fails] *** 2025-05-25 03:52:56.155450 | orchestrator | Sunday 25 May 2025 03:52:34 +0000 (0:00:01.542) 0:04:37.546 ************ 2025-05-25 03:52:56.155457 | orchestrator | skipping: [testbed-node-0] 2025-05-25 03:52:56.155464 | orchestrator | 2025-05-25 03:52:56.155470 | orchestrator | TASK [k3s_server_post : Test for BGP config resources] ************************* 2025-05-25 03:52:56.155477 | orchestrator 
| Sunday 25 May 2025 03:52:34 +0000 (0:00:00.223) 0:04:37.769 ************ 2025-05-25 03:52:56.155488 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=kubectl get CiliumBGPPeeringPolicy.cilium.io) 2025-05-25 03:52:56.155495 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=kubectl get CiliumLoadBalancerIPPool.cilium.io) 2025-05-25 03:52:56.155501 | orchestrator | 2025-05-25 03:52:56.155508 | orchestrator | TASK [k3s_server_post : Deploy metallb pool] *********************************** 2025-05-25 03:52:56.155514 | orchestrator | Sunday 25 May 2025 03:52:36 +0000 (0:00:02.675) 0:04:40.445 ************ 2025-05-25 03:52:56.155521 | orchestrator | skipping: [testbed-node-0] 2025-05-25 03:52:56.155528 | orchestrator | skipping: [testbed-node-1] 2025-05-25 03:52:56.155534 | orchestrator | skipping: [testbed-node-2] 2025-05-25 03:52:56.155541 | orchestrator | 2025-05-25 03:52:56.155547 | orchestrator | TASK [k3s_server_post : Remove tmp directory used for manifests] *************** 2025-05-25 03:52:56.155554 | orchestrator | Sunday 25 May 2025 03:52:37 +0000 (0:00:00.371) 0:04:40.817 ************ 2025-05-25 03:52:56.155565 | orchestrator | ok: [testbed-node-0] 2025-05-25 03:52:56.155572 | orchestrator | ok: [testbed-node-1] 2025-05-25 03:52:56.155578 | orchestrator | ok: [testbed-node-2] 2025-05-25 03:52:56.155585 | orchestrator | 2025-05-25 03:52:56.155592 | orchestrator | PLAY [Apply role k9s] ********************************************************** 2025-05-25 03:52:56.155598 | orchestrator | 2025-05-25 03:52:56.155608 | orchestrator | TASK [k9s : Gather variables for each operating system] ************************ 2025-05-25 03:52:56.155615 | orchestrator | Sunday 25 May 2025 03:52:38 +0000 (0:00:00.822) 0:04:41.640 ************ 2025-05-25 03:52:56.155622 | orchestrator | ok: [testbed-manager] 2025-05-25 03:52:56.155629 | orchestrator | 2025-05-25 03:52:56.155635 | orchestrator | TASK [k9s : Include distribution specific install tasks] 
*********************** 2025-05-25 03:52:56.155642 | orchestrator | Sunday 25 May 2025 03:52:38 +0000 (0:00:00.130) 0:04:41.771 ************ 2025-05-25 03:52:56.155649 | orchestrator | included: /ansible/roles/k9s/tasks/install-Debian-family.yml for testbed-manager 2025-05-25 03:52:56.155655 | orchestrator | 2025-05-25 03:52:56.155662 | orchestrator | TASK [k9s : Install k9s packages] ********************************************** 2025-05-25 03:52:56.155669 | orchestrator | Sunday 25 May 2025 03:52:38 +0000 (0:00:00.427) 0:04:42.198 ************ 2025-05-25 03:52:56.155675 | orchestrator | changed: [testbed-manager] 2025-05-25 03:52:56.155682 | orchestrator | 2025-05-25 03:52:56.155688 | orchestrator | PLAY [Manage labels, annotations, and taints on all k3s nodes] ***************** 2025-05-25 03:52:56.155695 | orchestrator | 2025-05-25 03:52:56.155702 | orchestrator | TASK [Merge labels, annotations, and taints] *********************************** 2025-05-25 03:52:56.155708 | orchestrator | Sunday 25 May 2025 03:52:45 +0000 (0:00:06.674) 0:04:48.872 ************ 2025-05-25 03:52:56.155715 | orchestrator | ok: [testbed-node-3] 2025-05-25 03:52:56.155721 | orchestrator | ok: [testbed-node-4] 2025-05-25 03:52:56.155728 | orchestrator | ok: [testbed-node-5] 2025-05-25 03:52:56.155735 | orchestrator | ok: [testbed-node-0] 2025-05-25 03:52:56.155741 | orchestrator | ok: [testbed-node-1] 2025-05-25 03:52:56.155748 | orchestrator | ok: [testbed-node-2] 2025-05-25 03:52:56.155754 | orchestrator | 2025-05-25 03:52:56.155761 | orchestrator | TASK [Manage labels] *********************************************************** 2025-05-25 03:52:56.155768 | orchestrator | Sunday 25 May 2025 03:52:45 +0000 (0:00:00.487) 0:04:49.359 ************ 2025-05-25 03:52:56.155775 | orchestrator | ok: [testbed-node-3 -> localhost] => (item=node-role.osism.tech/compute-plane=true) 2025-05-25 03:52:56.155781 | orchestrator | ok: [testbed-node-5 -> localhost] => 
(item=node-role.osism.tech/compute-plane=true) 2025-05-25 03:52:56.155788 | orchestrator | ok: [testbed-node-4 -> localhost] => (item=node-role.osism.tech/compute-plane=true) 2025-05-25 03:52:56.155794 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/control-plane=true) 2025-05-25 03:52:56.155801 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/control-plane=true) 2025-05-25 03:52:56.155808 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/control-plane=true) 2025-05-25 03:52:56.155814 | orchestrator | ok: [testbed-node-3 -> localhost] => (item=node-role.kubernetes.io/worker=worker) 2025-05-25 03:52:56.155825 | orchestrator | ok: [testbed-node-4 -> localhost] => (item=node-role.kubernetes.io/worker=worker) 2025-05-25 03:52:56.155832 | orchestrator | ok: [testbed-node-5 -> localhost] => (item=node-role.kubernetes.io/worker=worker) 2025-05-25 03:52:56.155839 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=openstack-control-plane=enabled) 2025-05-25 03:52:56.155846 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=openstack-control-plane=enabled) 2025-05-25 03:52:56.155852 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=openstack-control-plane=enabled) 2025-05-25 03:52:56.155859 | orchestrator | ok: [testbed-node-3 -> localhost] => (item=node-role.osism.tech/rook-osd=true) 2025-05-25 03:52:56.155866 | orchestrator | ok: [testbed-node-4 -> localhost] => (item=node-role.osism.tech/rook-osd=true) 2025-05-25 03:52:56.155872 | orchestrator | ok: [testbed-node-5 -> localhost] => (item=node-role.osism.tech/rook-osd=true) 2025-05-25 03:52:56.155879 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/network-plane=true) 2025-05-25 03:52:56.155886 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/network-plane=true) 2025-05-25 03:52:56.155892 | orchestrator | ok: [testbed-node-2 -> localhost] 
=> (item=node-role.osism.tech/network-plane=true) 2025-05-25 03:52:56.155899 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/rook-mds=true) 2025-05-25 03:52:56.155905 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/rook-mds=true) 2025-05-25 03:52:56.155912 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/rook-mds=true) 2025-05-25 03:52:56.155919 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/rook-mgr=true) 2025-05-25 03:52:56.155925 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/rook-mgr=true) 2025-05-25 03:52:56.155932 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/rook-mgr=true) 2025-05-25 03:52:56.155938 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/rook-mon=true) 2025-05-25 03:52:56.155945 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/rook-mon=true) 2025-05-25 03:52:56.155952 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/rook-mon=true) 2025-05-25 03:52:56.155958 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/rook-rgw=true) 2025-05-25 03:52:56.155965 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/rook-rgw=true) 2025-05-25 03:52:56.155976 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/rook-rgw=true) 2025-05-25 03:52:56.155983 | orchestrator | 2025-05-25 03:52:56.155990 | orchestrator | TASK [Manage annotations] ****************************************************** 2025-05-25 03:52:56.156000 | orchestrator | Sunday 25 May 2025 03:52:54 +0000 (0:00:08.362) 0:04:57.722 ************ 2025-05-25 03:52:56.156007 | orchestrator | skipping: [testbed-node-3] 2025-05-25 03:52:56.156014 | orchestrator | skipping: [testbed-node-4] 2025-05-25 03:52:56.156020 | orchestrator | 
skipping: [testbed-node-5] 2025-05-25 03:52:56.156027 | orchestrator | skipping: [testbed-node-0] 2025-05-25 03:52:56.156034 | orchestrator | skipping: [testbed-node-1] 2025-05-25 03:52:56.156041 | orchestrator | skipping: [testbed-node-2] 2025-05-25 03:52:56.156047 | orchestrator | 2025-05-25 03:52:56.156054 | orchestrator | TASK [Manage taints] *********************************************************** 2025-05-25 03:52:56.156061 | orchestrator | Sunday 25 May 2025 03:52:54 +0000 (0:00:00.400) 0:04:58.122 ************ 2025-05-25 03:52:56.156067 | orchestrator | skipping: [testbed-node-3] 2025-05-25 03:52:56.156074 | orchestrator | skipping: [testbed-node-4] 2025-05-25 03:52:56.156080 | orchestrator | skipping: [testbed-node-5] 2025-05-25 03:52:56.156087 | orchestrator | skipping: [testbed-node-0] 2025-05-25 03:52:56.156094 | orchestrator | skipping: [testbed-node-1] 2025-05-25 03:52:56.156100 | orchestrator | skipping: [testbed-node-2] 2025-05-25 03:52:56.156135 | orchestrator | 2025-05-25 03:52:56.156143 | orchestrator | PLAY RECAP ********************************************************************* 2025-05-25 03:52:56.156150 | orchestrator | testbed-manager : ok=21  changed=11  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-05-25 03:52:56.156158 | orchestrator | testbed-node-0 : ok=46  changed=21  unreachable=0 failed=0 skipped=27  rescued=0 ignored=0 2025-05-25 03:52:56.156165 | orchestrator | testbed-node-1 : ok=34  changed=14  unreachable=0 failed=0 skipped=24  rescued=0 ignored=0 2025-05-25 03:52:56.156172 | orchestrator | testbed-node-2 : ok=34  changed=14  unreachable=0 failed=0 skipped=24  rescued=0 ignored=0 2025-05-25 03:52:56.156179 | orchestrator | testbed-node-3 : ok=14  changed=6  unreachable=0 failed=0 skipped=16  rescued=0 ignored=0 2025-05-25 03:52:56.156185 | orchestrator | testbed-node-4 : ok=14  changed=6  unreachable=0 failed=0 skipped=16  rescued=0 ignored=0 2025-05-25 03:52:56.156192 | orchestrator | testbed-node-5 : ok=14  
changed=6  unreachable=0 failed=0 skipped=16  rescued=0 ignored=0 2025-05-25 03:52:56.156199 | orchestrator | 2025-05-25 03:52:56.156206 | orchestrator | 2025-05-25 03:52:56.156212 | orchestrator | TASKS RECAP ******************************************************************** 2025-05-25 03:52:56.156219 | orchestrator | Sunday 25 May 2025 03:52:55 +0000 (0:00:00.555) 0:04:58.678 ************ 2025-05-25 03:52:56.156226 | orchestrator | =============================================================================== 2025-05-25 03:52:56.156232 | orchestrator | k3s_server_post : Wait for Cilium resources ---------------------------- 93.42s 2025-05-25 03:52:56.156239 | orchestrator | k3s_server : Verify that all nodes actually joined (check k3s-init.service if this fails) -- 55.98s 2025-05-25 03:52:56.156246 | orchestrator | k3s_server : Enable and check K3s service ------------------------------ 15.21s 2025-05-25 03:52:56.156252 | orchestrator | kubectl : Install required packages ------------------------------------ 10.84s 2025-05-25 03:52:56.156259 | orchestrator | k3s_agent : Manage k3s service ----------------------------------------- 10.50s 2025-05-25 03:52:56.156265 | orchestrator | Manage labels ----------------------------------------------------------- 8.36s 2025-05-25 03:52:56.156272 | orchestrator | kubectl : Add repository Debian ----------------------------------------- 6.73s 2025-05-25 03:52:56.156279 | orchestrator | k9s : Install k9s packages ---------------------------------------------- 6.67s 2025-05-25 03:52:56.156285 | orchestrator | k3s_download : Download k3s binary x64 ---------------------------------- 6.18s 2025-05-25 03:52:56.156292 | orchestrator | k3s_server_post : Install Cilium ---------------------------------------- 4.68s 2025-05-25 03:52:56.156298 | orchestrator | k3s_server : Remove manifests and folders that are only needed for bootstrapping cluster so k3s doesn't auto apply on start --- 2.97s 2025-05-25 03:52:56.156305 | orchestrator 
| k3s_server_post : Test for BGP config resources ------------------------- 2.68s 2025-05-25 03:52:56.156312 | orchestrator | k3s_server : Set _kube_vip_bgp_peers fact ------------------------------- 2.17s 2025-05-25 03:52:56.156318 | orchestrator | k3s_server_post : Copy BGP manifests to first master -------------------- 2.01s 2025-05-25 03:52:56.156325 | orchestrator | k3s_server : Copy vip manifest to first master -------------------------- 1.93s 2025-05-25 03:52:56.156332 | orchestrator | k3s_download : Download k3s binary armhf -------------------------------- 1.90s 2025-05-25 03:52:56.156338 | orchestrator | k3s_prereq : Enable IPv4 forwarding ------------------------------------- 1.90s 2025-05-25 03:52:56.156345 | orchestrator | k3s_server : Validating arguments against arg spec 'main' - Setup k3s servers --- 1.88s 2025-05-25 03:52:56.156351 | orchestrator | k3s_server : Init cluster inside the transient k3s-init service --------- 1.78s 2025-05-25 03:52:56.156358 | orchestrator | Make kubeconfig available for use inside the manager service ------------ 1.62s 2025-05-25 03:52:56.156369 | orchestrator | 2025-05-25 03:52:56 | INFO  | Task f0db41f2-e75f-4158-81e2-acab8c664db0 is in state STARTED 2025-05-25 03:52:56.156380 | orchestrator | 2025-05-25 03:52:56 | INFO  | Task c0214740-d9b7-4bee-98a0-0214c94bbfee is in state STARTED 2025-05-25 03:52:56.156390 | orchestrator | 2025-05-25 03:52:56 | INFO  | Task a839d0dc-b1b8-464e-b4ab-1e7908546cf0 is in state STARTED 2025-05-25 03:52:56.156397 | orchestrator | 2025-05-25 03:52:56 | INFO  | Task a5c3152a-f935-4d0c-ba37-5585c5fa0428 is in state STARTED 2025-05-25 03:52:56.156403 | orchestrator | 2025-05-25 03:52:56 | INFO  | Wait 1 second(s) until the next check 2025-05-25 03:52:59.204783 | orchestrator | 2025-05-25 03:52:59 | INFO  | Task f0db41f2-e75f-4158-81e2-acab8c664db0 is in state STARTED 2025-05-25 03:52:59.207172 | orchestrator | 2025-05-25 03:52:59 | INFO  | Task c53cb6e7-65fd-4675-ad53-0be711ed8629 is in 
state STARTED 2025-05-25 03:52:59.211697 | orchestrator | 2025-05-25 03:52:59 | INFO  | Task c0214740-d9b7-4bee-98a0-0214c94bbfee is in state STARTED 2025-05-25 03:52:59.213211 | orchestrator | 2025-05-25 03:52:59 | INFO  | Task a839d0dc-b1b8-464e-b4ab-1e7908546cf0 is in state STARTED 2025-05-25 03:52:59.215477 | orchestrator | 2025-05-25 03:52:59 | INFO  | Task a5c3152a-f935-4d0c-ba37-5585c5fa0428 is in state STARTED 2025-05-25 03:52:59.217221 | orchestrator | 2025-05-25 03:52:59 | INFO  | Task 4cc56822-e768-4a8b-98c4-e489f0aef08a is in state STARTED 2025-05-25 03:52:59.217369 | orchestrator | 2025-05-25 03:52:59 | INFO  | Wait 1 second(s) until the next check 2025-05-25 03:53:02.268039 | orchestrator | 2025-05-25 03:53:02 | INFO  | Task f0db41f2-e75f-4158-81e2-acab8c664db0 is in state STARTED 2025-05-25 03:53:02.271649 | orchestrator | 2025-05-25 03:53:02 | INFO  | Task c53cb6e7-65fd-4675-ad53-0be711ed8629 is in state STARTED 2025-05-25 03:53:02.273712 | orchestrator | 2025-05-25 03:53:02 | INFO  | Task c0214740-d9b7-4bee-98a0-0214c94bbfee is in state STARTED 2025-05-25 03:53:02.275947 | orchestrator | 2025-05-25 03:53:02 | INFO  | Task a839d0dc-b1b8-464e-b4ab-1e7908546cf0 is in state STARTED 2025-05-25 03:53:02.278191 | orchestrator | 2025-05-25 03:53:02 | INFO  | Task a5c3152a-f935-4d0c-ba37-5585c5fa0428 is in state STARTED 2025-05-25 03:53:02.279338 | orchestrator | 2025-05-25 03:53:02 | INFO  | Task 4cc56822-e768-4a8b-98c4-e489f0aef08a is in state SUCCESS 2025-05-25 03:53:02.279706 | orchestrator | 2025-05-25 03:53:02 | INFO  | Wait 1 second(s) until the next check 2025-05-25 03:53:05.317803 | orchestrator | 2025-05-25 03:53:05 | INFO  | Task f0db41f2-e75f-4158-81e2-acab8c664db0 is in state STARTED 2025-05-25 03:53:05.317897 | orchestrator | 2025-05-25 03:53:05 | INFO  | Task c53cb6e7-65fd-4675-ad53-0be711ed8629 is in state STARTED 2025-05-25 03:53:05.318963 | orchestrator | 2025-05-25 03:53:05 | INFO  | Task c0214740-d9b7-4bee-98a0-0214c94bbfee is in state 
STARTED 2025-05-25 03:53:05.321220 | orchestrator | 2025-05-25 03:53:05 | INFO  | Task a839d0dc-b1b8-464e-b4ab-1e7908546cf0 is in state STARTED 2025-05-25 03:53:05.321848 | orchestrator | 2025-05-25 03:53:05 | INFO  | Task a5c3152a-f935-4d0c-ba37-5585c5fa0428 is in state STARTED 2025-05-25 03:53:05.321874 | orchestrator | 2025-05-25 03:53:05 | INFO  | Wait 1 second(s) until the next check 2025-05-25 03:53:08.366406 | orchestrator | 2025-05-25 03:53:08 | INFO  | Task f0db41f2-e75f-4158-81e2-acab8c664db0 is in state STARTED 2025-05-25 03:53:08.366506 | orchestrator | 2025-05-25 03:53:08 | INFO  | Task c53cb6e7-65fd-4675-ad53-0be711ed8629 is in state SUCCESS 2025-05-25 03:53:08.366991 | orchestrator | 2025-05-25 03:53:08 | INFO  | Task c0214740-d9b7-4bee-98a0-0214c94bbfee is in state STARTED 2025-05-25 03:53:08.367879 | orchestrator | 2025-05-25 03:53:08 | INFO  | Task a839d0dc-b1b8-464e-b4ab-1e7908546cf0 is in state STARTED 2025-05-25 03:53:08.372281 | orchestrator | 2025-05-25 03:53:08 | INFO  | Task a5c3152a-f935-4d0c-ba37-5585c5fa0428 is in state SUCCESS 2025-05-25 03:53:08.372387 | orchestrator | 2025-05-25 03:53:08 | INFO  | Wait 1 second(s) until the next check 2025-05-25 03:53:08.374463 | orchestrator | 2025-05-25 03:53:08.374517 | orchestrator | 2025-05-25 03:53:08.374531 | orchestrator | PLAY [Copy kubeconfig to the configuration repository] ************************* 2025-05-25 03:53:08.374543 | orchestrator | 2025-05-25 03:53:08.374554 | orchestrator | TASK [Get kubeconfig file] ***************************************************** 2025-05-25 03:53:08.374694 | orchestrator | Sunday 25 May 2025 03:52:59 +0000 (0:00:00.165) 0:00:00.165 ************ 2025-05-25 03:53:08.374711 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] 2025-05-25 03:53:08.374723 | orchestrator | 2025-05-25 03:53:08.374734 | orchestrator | TASK [Write kubeconfig file] *************************************************** 2025-05-25 03:53:08.374745 | orchestrator | 
Sunday 25 May 2025 03:53:00 +0000 (0:00:00.854) 0:00:01.020 ************ 2025-05-25 03:53:08.374756 | orchestrator | changed: [testbed-manager] 2025-05-25 03:53:08.374767 | orchestrator | 2025-05-25 03:53:08.374778 | orchestrator | TASK [Change server address in the kubeconfig file] **************************** 2025-05-25 03:53:08.374789 | orchestrator | Sunday 25 May 2025 03:53:01 +0000 (0:00:01.255) 0:00:02.275 ************ 2025-05-25 03:53:08.374800 | orchestrator | changed: [testbed-manager] 2025-05-25 03:53:08.374811 | orchestrator | 2025-05-25 03:53:08.374831 | orchestrator | PLAY RECAP ********************************************************************* 2025-05-25 03:53:08.374843 | orchestrator | testbed-manager : ok=3  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-05-25 03:53:08.374856 | orchestrator | 2025-05-25 03:53:08.374867 | orchestrator | 2025-05-25 03:53:08.374878 | orchestrator | TASKS RECAP ******************************************************************** 2025-05-25 03:53:08.374889 | orchestrator | Sunday 25 May 2025 03:53:01 +0000 (0:00:00.517) 0:00:02.793 ************ 2025-05-25 03:53:08.374900 | orchestrator | =============================================================================== 2025-05-25 03:53:08.374911 | orchestrator | Write kubeconfig file --------------------------------------------------- 1.26s 2025-05-25 03:53:08.374922 | orchestrator | Get kubeconfig file ----------------------------------------------------- 0.85s 2025-05-25 03:53:08.374932 | orchestrator | Change server address in the kubeconfig file ---------------------------- 0.52s 2025-05-25 03:53:08.374943 | orchestrator | 2025-05-25 03:53:08.374954 | orchestrator | 2025-05-25 03:53:08.374966 | orchestrator | PLAY [Prepare kubeconfig file] ************************************************* 2025-05-25 03:53:08.374977 | orchestrator | 2025-05-25 03:53:08.374987 | orchestrator | TASK [Get home directory of operator user] 
************************************* 2025-05-25 03:53:08.374998 | orchestrator | Sunday 25 May 2025 03:52:59 +0000 (0:00:00.166) 0:00:00.166 ************ 2025-05-25 03:53:08.375009 | orchestrator | ok: [testbed-manager] 2025-05-25 03:53:08.375021 | orchestrator | 2025-05-25 03:53:08.375034 | orchestrator | TASK [Create .kube directory] ************************************************** 2025-05-25 03:53:08.375047 | orchestrator | Sunday 25 May 2025 03:52:59 +0000 (0:00:00.596) 0:00:00.762 ************ 2025-05-25 03:53:08.375059 | orchestrator | ok: [testbed-manager] 2025-05-25 03:53:08.375072 | orchestrator | 2025-05-25 03:53:08.375085 | orchestrator | TASK [Get kubeconfig file] ***************************************************** 2025-05-25 03:53:08.375098 | orchestrator | Sunday 25 May 2025 03:53:00 +0000 (0:00:00.637) 0:00:01.400 ************ 2025-05-25 03:53:08.375138 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] 2025-05-25 03:53:08.375152 | orchestrator | 2025-05-25 03:53:08.375165 | orchestrator | TASK [Write kubeconfig file] *************************************************** 2025-05-25 03:53:08.375177 | orchestrator | Sunday 25 May 2025 03:53:01 +0000 (0:00:00.829) 0:00:02.229 ************ 2025-05-25 03:53:08.375191 | orchestrator | changed: [testbed-manager] 2025-05-25 03:53:08.375230 | orchestrator | 2025-05-25 03:53:08.375259 | orchestrator | TASK [Change server address in the kubeconfig] ********************************* 2025-05-25 03:53:08.375278 | orchestrator | Sunday 25 May 2025 03:53:02 +0000 (0:00:01.200) 0:00:03.430 ************ 2025-05-25 03:53:08.375296 | orchestrator | changed: [testbed-manager] 2025-05-25 03:53:08.375313 | orchestrator | 2025-05-25 03:53:08.375331 | orchestrator | TASK [Make kubeconfig available for use inside the manager service] ************ 2025-05-25 03:53:08.375350 | orchestrator | Sunday 25 May 2025 03:53:03 +0000 (0:00:00.685) 0:00:04.116 ************ 2025-05-25 03:53:08.375369 | orchestrator 
| changed: [testbed-manager -> localhost] 2025-05-25 03:53:08.375387 | orchestrator | 2025-05-25 03:53:08.375406 | orchestrator | TASK [Change server address in the kubeconfig inside the manager service] ****** 2025-05-25 03:53:08.375421 | orchestrator | Sunday 25 May 2025 03:53:04 +0000 (0:00:01.650) 0:00:05.766 ************ 2025-05-25 03:53:08.375431 | orchestrator | changed: [testbed-manager -> localhost] 2025-05-25 03:53:08.375442 | orchestrator | 2025-05-25 03:53:08.375453 | orchestrator | TASK [Set KUBECONFIG environment variable] ************************************* 2025-05-25 03:53:08.375464 | orchestrator | Sunday 25 May 2025 03:53:05 +0000 (0:00:00.849) 0:00:06.616 ************ 2025-05-25 03:53:08.375474 | orchestrator | ok: [testbed-manager] 2025-05-25 03:53:08.375485 | orchestrator | 2025-05-25 03:53:08.375495 | orchestrator | TASK [Enable kubectl command line completion] ********************************** 2025-05-25 03:53:08.375506 | orchestrator | Sunday 25 May 2025 03:53:06 +0000 (0:00:00.441) 0:00:07.058 ************ 2025-05-25 03:53:08.375517 | orchestrator | ok: [testbed-manager] 2025-05-25 03:53:08.375527 | orchestrator | 2025-05-25 03:53:08.375538 | orchestrator | PLAY RECAP ********************************************************************* 2025-05-25 03:53:08.375549 | orchestrator | testbed-manager : ok=9  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-05-25 03:53:08.375560 | orchestrator | 2025-05-25 03:53:08.375570 | orchestrator | 2025-05-25 03:53:08.375581 | orchestrator | TASKS RECAP ******************************************************************** 2025-05-25 03:53:08.375592 | orchestrator | Sunday 25 May 2025 03:53:06 +0000 (0:00:00.331) 0:00:07.389 ************ 2025-05-25 03:53:08.375602 | orchestrator | =============================================================================== 2025-05-25 03:53:08.375613 | orchestrator | Make kubeconfig available for use inside the manager service ------------ 1.65s 
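The recurring "Change server address in the kubeconfig" tasks rewrite the `server:` entry from the node-local address k3s writes by default to an address reachable from the manager. A sketch of that substitution as a plain text rewrite, assuming the standard kubeconfig `server:` line format (the actual play may use an Ansible `replace` or `lineinfile` task instead):

```python
import re

def set_kubeconfig_server(kubeconfig_text: str, new_server: str) -> str:
    """Rewrite every 'server:' entry in a kubeconfig to point at new_server.

    k3s writes https://127.0.0.1:6443 by default; a task like
    'Change server address in the kubeconfig' above swaps in the
    VIP or manager-reachable address instead.
    """
    return re.sub(r"(?m)^(\s*server:\s*).*$", rf"\g<1>{new_server}", kubeconfig_text)
```

Running the rewrite twice (once for the operator's copy, once for the copy inside the manager service) with different target addresses matches the two "Change server address" tasks seen in the play.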
2025-05-25 03:53:08.375623 | orchestrator | Write kubeconfig file --------------------------------------------------- 1.20s 2025-05-25 03:53:08.375634 | orchestrator | Change server address in the kubeconfig inside the manager service ------ 0.85s 2025-05-25 03:53:08.375661 | orchestrator | Get kubeconfig file ----------------------------------------------------- 0.83s 2025-05-25 03:53:08.375672 | orchestrator | Change server address in the kubeconfig --------------------------------- 0.69s 2025-05-25 03:53:08.375683 | orchestrator | Create .kube directory -------------------------------------------------- 0.64s 2025-05-25 03:53:08.375693 | orchestrator | Get home directory of operator user ------------------------------------- 0.60s 2025-05-25 03:53:08.375704 | orchestrator | Set KUBECONFIG environment variable ------------------------------------- 0.44s 2025-05-25 03:53:08.375715 | orchestrator | Enable kubectl command line completion ---------------------------------- 0.33s 2025-05-25 03:53:08.375726 | orchestrator | 2025-05-25 03:53:08.375736 | orchestrator | 2025-05-25 03:53:08.375747 | orchestrator | PLAY [Set kolla_action_rabbitmq] *********************************************** 2025-05-25 03:53:08.375757 | orchestrator | 2025-05-25 03:53:08.375767 | orchestrator | TASK [Inform the user about the following task] ******************************** 2025-05-25 03:53:08.375778 | orchestrator | Sunday 25 May 2025 03:50:55 +0000 (0:00:00.112) 0:00:00.112 ************ 2025-05-25 03:53:08.375789 | orchestrator | ok: [localhost] => { 2025-05-25 03:53:08.375807 | orchestrator |  "msg": "The task 'Check RabbitMQ service' fails if the RabbitMQ service has not yet been deployed. This is fine." 
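[Editor's aside: the "Change server address in the kubeconfig" tasks above amount to rewriting the `server:` field of the fetched kubeconfig so that `kubectl` on the manager reaches the node's API endpoint. A minimal Python sketch of that rewrite; the sample document and target address are illustrative, not taken from the testbed configuration:]

```python
import re

def set_kubeconfig_server(kubeconfig_text: str, new_server: str) -> str:
    """Replace every 'server:' entry in a kubeconfig with new_server."""
    # (?m) makes ^/$ match per line; group 1 keeps the indentation and key.
    return re.sub(r"(?m)^(\s*server:\s*).*$", r"\g<1>" + new_server, kubeconfig_text)

# Hypothetical kubeconfig fragment for illustration only.
sample = """apiVersion: v1
clusters:
- cluster:
    server: https://127.0.0.1:6443
  name: testbed
"""
print(set_kubeconfig_server(sample, "https://192.168.16.10:6443"))
```

[The same effect is often achieved with `kubectl config set-cluster --server=...` or a `sed` one-liner.]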
2025-05-25 03:53:08.375819 | orchestrator | }
2025-05-25 03:53:08.375830 | orchestrator |
2025-05-25 03:53:08.375840 | orchestrator | TASK [Check RabbitMQ service] **************************************************
2025-05-25 03:53:08.375862 | orchestrator | Sunday 25 May 2025 03:50:55 +0000 (0:00:00.069) 0:00:00.181 ************
2025-05-25 03:53:08.375882 | orchestrator | fatal: [localhost]: FAILED! => {"changed": false, "elapsed": 2, "msg": "Timeout when waiting for search string RabbitMQ Management in 192.168.16.9:15672"}
2025-05-25 03:53:08.375915 | orchestrator | ...ignoring
2025-05-25 03:53:08.375934 | orchestrator |
2025-05-25 03:53:08.375952 | orchestrator | TASK [Set kolla_action_rabbitmq = upgrade if RabbitMQ is already running] ******
2025-05-25 03:53:08.375971 | orchestrator | Sunday 25 May 2025 03:50:59 +0000 (0:00:03.575) 0:00:03.757 ************
2025-05-25 03:53:08.375990 | orchestrator | skipping: [localhost]
2025-05-25 03:53:08.376010 | orchestrator |
2025-05-25 03:53:08.376022 | orchestrator | TASK [Set kolla_action_rabbitmq = kolla_action_ng] *****************************
2025-05-25 03:53:08.376033 | orchestrator | Sunday 25 May 2025 03:50:59 +0000 (0:00:00.044) 0:00:03.802 ************
2025-05-25 03:53:08.376043 | orchestrator | ok: [localhost]
2025-05-25 03:53:08.376054 | orchestrator |
2025-05-25 03:53:08.376065 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2025-05-25 03:53:08.376075 | orchestrator |
2025-05-25 03:53:08.376086 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2025-05-25 03:53:08.376097 | orchestrator | Sunday 25 May 2025 03:50:59 +0000 (0:00:00.220) 0:00:04.022 ************
2025-05-25 03:53:08.376136 | orchestrator | ok: [testbed-node-0]
2025-05-25 03:53:08.376148 | orchestrator | ok: [testbed-node-1]
2025-05-25 03:53:08.376159 | orchestrator | ok: [testbed-node-2]
2025-05-25 03:53:08.376170 | orchestrator |
2025-05-25 03:53:08.376181 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2025-05-25 03:53:08.376192 | orchestrator | Sunday 25 May 2025 03:51:00 +0000 (0:00:00.585) 0:00:04.608 ************
2025-05-25 03:53:08.376202 | orchestrator | ok: [testbed-node-0] => (item=enable_rabbitmq_True)
2025-05-25 03:53:08.376214 | orchestrator | ok: [testbed-node-1] => (item=enable_rabbitmq_True)
2025-05-25 03:53:08.376225 | orchestrator | ok: [testbed-node-2] => (item=enable_rabbitmq_True)
2025-05-25 03:53:08.376235 | orchestrator |
2025-05-25 03:53:08.376246 | orchestrator | PLAY [Apply role rabbitmq] *****************************************************
2025-05-25 03:53:08.376257 | orchestrator |
2025-05-25 03:53:08.376268 | orchestrator | TASK [rabbitmq : include_tasks] ************************************************
2025-05-25 03:53:08.376279 | orchestrator | Sunday 25 May 2025 03:51:01 +0000 (0:00:01.019) 0:00:05.627 ************
2025-05-25 03:53:08.376290 | orchestrator | included: /ansible/roles/rabbitmq/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-05-25 03:53:08.376301 | orchestrator |
2025-05-25 03:53:08.376311 | orchestrator | TASK [rabbitmq : Get container facts] ******************************************
2025-05-25 03:53:08.376322 | orchestrator | Sunday 25 May 2025 03:51:02 +0000 (0:00:01.611) 0:00:07.238 ************
2025-05-25 03:53:08.376333 | orchestrator | ok: [testbed-node-0]
2025-05-25 03:53:08.376343 | orchestrator |
2025-05-25 03:53:08.376354 | orchestrator | TASK [rabbitmq : Get current RabbitMQ version] *********************************
2025-05-25 03:53:08.376364 | orchestrator | Sunday 25 May 2025 03:51:03 +0000 (0:00:01.108) 0:00:08.347 ************
2025-05-25 03:53:08.376375 | orchestrator | skipping: [testbed-node-0]
2025-05-25 03:53:08.376386 | orchestrator |
2025-05-25 03:53:08.376398 | orchestrator | TASK [rabbitmq : Get new RabbitMQ version] *************************************
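[Editor's aside: the failed-then-ignored "Check RabbitMQ service" task above is a reachability probe whose error message format matches Ansible's `wait_for` module with a `search_regex`: fetch the management endpoint until the body contains "RabbitMQ Management", or give up after a timeout. A rough Python equivalent of that probe, with illustrative defaults; the real task's host, port, and timeout come from the playbook:]

```python
import time
import urllib.request

def wait_for_search_string(url: str, needle: str,
                           timeout: float = 3.0, interval: float = 1.0) -> bool:
    """Poll url until its response body contains needle, or timeout elapses."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        try:
            with urllib.request.urlopen(url, timeout=interval) as resp:
                if needle in resp.read().decode("utf-8", "replace"):
                    return True
        except OSError:
            pass  # connection refused/reset while the service is still starting
        time.sleep(interval)
    return False
```

[A `False` result here corresponds to the `fatal: ... Timeout when waiting for search string` line, which the play ignores and uses to decide between deploy and upgrade.]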
2025-05-25 03:53:08.376416 | orchestrator | Sunday 25 May 2025 03:51:04 +0000 (0:00:00.551) 0:00:08.898 ************ 2025-05-25 03:53:08.376444 | orchestrator | skipping: [testbed-node-0] 2025-05-25 03:53:08.376464 | orchestrator | 2025-05-25 03:53:08.376482 | orchestrator | TASK [rabbitmq : Check if running RabbitMQ is at most one version behind] ****** 2025-05-25 03:53:08.376499 | orchestrator | Sunday 25 May 2025 03:51:04 +0000 (0:00:00.309) 0:00:09.208 ************ 2025-05-25 03:53:08.376516 | orchestrator | skipping: [testbed-node-0] 2025-05-25 03:53:08.376533 | orchestrator | 2025-05-25 03:53:08.376551 | orchestrator | TASK [rabbitmq : Catch when RabbitMQ is being downgraded] ********************** 2025-05-25 03:53:08.376568 | orchestrator | Sunday 25 May 2025 03:51:05 +0000 (0:00:00.349) 0:00:09.558 ************ 2025-05-25 03:53:08.376599 | orchestrator | skipping: [testbed-node-0] 2025-05-25 03:53:08.376617 | orchestrator | 2025-05-25 03:53:08.376636 | orchestrator | TASK [rabbitmq : include_tasks] ************************************************ 2025-05-25 03:53:08.376651 | orchestrator | Sunday 25 May 2025 03:51:05 +0000 (0:00:00.517) 0:00:10.075 ************ 2025-05-25 03:53:08.376662 | orchestrator | included: /ansible/roles/rabbitmq/tasks/remove-ha-all-policy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-25 03:53:08.376673 | orchestrator | 2025-05-25 03:53:08.376683 | orchestrator | TASK [rabbitmq : Get container facts] ****************************************** 2025-05-25 03:53:08.376704 | orchestrator | Sunday 25 May 2025 03:51:06 +0000 (0:00:00.608) 0:00:10.684 ************ 2025-05-25 03:53:08.376716 | orchestrator | ok: [testbed-node-0] 2025-05-25 03:53:08.376726 | orchestrator | 2025-05-25 03:53:08.376795 | orchestrator | TASK [rabbitmq : List RabbitMQ policies] *************************************** 2025-05-25 03:53:08.376807 | orchestrator | Sunday 25 May 2025 03:51:07 +0000 (0:00:00.828) 0:00:11.512 ************ 2025-05-25 
03:53:08.376818 | orchestrator | skipping: [testbed-node-0] 2025-05-25 03:53:08.376829 | orchestrator | 2025-05-25 03:53:08.376840 | orchestrator | TASK [rabbitmq : Remove ha-all policy from RabbitMQ] *************************** 2025-05-25 03:53:08.376851 | orchestrator | Sunday 25 May 2025 03:51:07 +0000 (0:00:00.472) 0:00:11.984 ************ 2025-05-25 03:53:08.376861 | orchestrator | skipping: [testbed-node-0] 2025-05-25 03:53:08.376900 | orchestrator | 2025-05-25 03:53:08.376912 | orchestrator | TASK [rabbitmq : Ensuring config directories exist] **************************** 2025-05-25 03:53:08.376923 | orchestrator | Sunday 25 May 2025 03:51:07 +0000 (0:00:00.391) 0:00:12.376 ************ 2025-05-25 03:53:08.376947 | orchestrator | changed: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2025-05-25 03:53:08.376965 | orchestrator | changed: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 
'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2025-05-25 03:53:08.376979 | orchestrator | changed: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2025-05-25 
03:53:08.377000 | orchestrator | 2025-05-25 03:53:08.377011 | orchestrator | TASK [rabbitmq : Copying over config.json files for services] ****************** 2025-05-25 03:53:08.377023 | orchestrator | Sunday 25 May 2025 03:51:09 +0000 (0:00:01.208) 0:00:13.584 ************ 2025-05-25 03:53:08.377049 | orchestrator | changed: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2025-05-25 03:53:08.377062 | orchestrator | changed: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': 
['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2025-05-25 03:53:08.377075 | orchestrator | changed: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2025-05-25 03:53:08.377093 | orchestrator | 2025-05-25 03:53:08.377147 | orchestrator | TASK [rabbitmq : Copying over rabbitmq-env.conf] ******************************* 2025-05-25 03:53:08.377159 | orchestrator | Sunday 25 May 2025 03:51:11 +0000 (0:00:02.314) 0:00:15.899 ************ 2025-05-25 03:53:08.377170 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/rabbitmq-env.conf.j2) 2025-05-25 03:53:08.377182 | 
orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/rabbitmq-env.conf.j2) 2025-05-25 03:53:08.377193 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/rabbitmq-env.conf.j2) 2025-05-25 03:53:08.377204 | orchestrator | 2025-05-25 03:53:08.377214 | orchestrator | TASK [rabbitmq : Copying over rabbitmq.conf] *********************************** 2025-05-25 03:53:08.377225 | orchestrator | Sunday 25 May 2025 03:51:12 +0000 (0:00:01.412) 0:00:17.312 ************ 2025-05-25 03:53:08.377236 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/rabbitmq.conf.j2) 2025-05-25 03:53:08.377247 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/rabbitmq.conf.j2) 2025-05-25 03:53:08.377257 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/rabbitmq.conf.j2) 2025-05-25 03:53:08.377268 | orchestrator | 2025-05-25 03:53:08.377279 | orchestrator | TASK [rabbitmq : Copying over erl_inetrc] ************************************** 2025-05-25 03:53:08.377296 | orchestrator | Sunday 25 May 2025 03:51:16 +0000 (0:00:03.208) 0:00:20.520 ************ 2025-05-25 03:53:08.377308 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/erl_inetrc.j2) 2025-05-25 03:53:08.377319 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/erl_inetrc.j2) 2025-05-25 03:53:08.377330 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/erl_inetrc.j2) 2025-05-25 03:53:08.377341 | orchestrator | 2025-05-25 03:53:08.377352 | orchestrator | TASK [rabbitmq : Copying over advanced.config] ********************************* 2025-05-25 03:53:08.377362 | orchestrator | Sunday 25 May 2025 03:51:18 +0000 (0:00:01.990) 0:00:22.510 ************ 2025-05-25 03:53:08.377373 | orchestrator | changed: [testbed-node-0] => 
(item=/ansible/roles/rabbitmq/templates/advanced.config.j2) 2025-05-25 03:53:08.377384 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/advanced.config.j2) 2025-05-25 03:53:08.377395 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/advanced.config.j2) 2025-05-25 03:53:08.377406 | orchestrator | 2025-05-25 03:53:08.377421 | orchestrator | TASK [rabbitmq : Copying over definitions.json] ******************************** 2025-05-25 03:53:08.377432 | orchestrator | Sunday 25 May 2025 03:51:19 +0000 (0:00:01.614) 0:00:24.125 ************ 2025-05-25 03:53:08.377443 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/definitions.json.j2) 2025-05-25 03:53:08.377454 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/definitions.json.j2) 2025-05-25 03:53:08.377465 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/definitions.json.j2) 2025-05-25 03:53:08.377476 | orchestrator | 2025-05-25 03:53:08.377486 | orchestrator | TASK [rabbitmq : Copying over enabled_plugins] ********************************* 2025-05-25 03:53:08.377497 | orchestrator | Sunday 25 May 2025 03:51:20 +0000 (0:00:01.360) 0:00:25.487 ************ 2025-05-25 03:53:08.377508 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/enabled_plugins.j2) 2025-05-25 03:53:08.377518 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/enabled_plugins.j2) 2025-05-25 03:53:08.377529 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/enabled_plugins.j2) 2025-05-25 03:53:08.377562 | orchestrator | 2025-05-25 03:53:08.377584 | orchestrator | TASK [rabbitmq : include_tasks] ************************************************ 2025-05-25 03:53:08.377595 | orchestrator | Sunday 25 May 2025 03:51:22 +0000 (0:00:01.259) 0:00:26.747 ************ 2025-05-25 
03:53:08.377606 | orchestrator | skipping: [testbed-node-0] 2025-05-25 03:53:08.377617 | orchestrator | skipping: [testbed-node-1] 2025-05-25 03:53:08.377628 | orchestrator | skipping: [testbed-node-2] 2025-05-25 03:53:08.377638 | orchestrator | 2025-05-25 03:53:08.377649 | orchestrator | TASK [rabbitmq : Check rabbitmq containers] ************************************ 2025-05-25 03:53:08.377660 | orchestrator | Sunday 25 May 2025 03:51:22 +0000 (0:00:00.570) 0:00:27.318 ************ 2025-05-25 03:53:08.377672 | orchestrator | changed: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2025-05-25 03:53:08.377691 | orchestrator | changed: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': 
'/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2025-05-25 03:53:08.377709 | orchestrator | changed: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2025-05-25 03:53:08.377721 | orchestrator | 2025-05-25 03:53:08.377732 | orchestrator | TASK [rabbitmq : Creating rabbitmq volume] ************************************* 2025-05-25 03:53:08.377743 | orchestrator | Sunday 25 May 2025 
03:51:24 +0000 (0:00:01.434) 0:00:28.753 ************
2025-05-25 03:53:08.377760 | orchestrator | changed: [testbed-node-0]
2025-05-25 03:53:08.377771 | orchestrator | changed: [testbed-node-1]
2025-05-25 03:53:08.377782 | orchestrator | changed: [testbed-node-2]
2025-05-25 03:53:08.377793 | orchestrator |
2025-05-25 03:53:08.377809 | orchestrator | TASK [rabbitmq : Running RabbitMQ bootstrap container] *************************
2025-05-25 03:53:08.377829 | orchestrator | Sunday 25 May 2025 03:51:25 +0000 (0:00:00.949) 0:00:29.702 ************
2025-05-25 03:53:08.377850 | orchestrator | changed: [testbed-node-0]
2025-05-25 03:53:08.377871 | orchestrator | changed: [testbed-node-2]
2025-05-25 03:53:08.377890 | orchestrator | changed: [testbed-node-1]
2025-05-25 03:53:08.377902 | orchestrator |
2025-05-25 03:53:08.377912 | orchestrator | RUNNING HANDLER [rabbitmq : Restart rabbitmq container] ************************
2025-05-25 03:53:08.377923 | orchestrator | Sunday 25 May 2025 03:51:32 +0000 (0:00:07.264) 0:00:36.966 ************
2025-05-25 03:53:08.377934 | orchestrator | changed: [testbed-node-0]
2025-05-25 03:53:08.377944 | orchestrator | changed: [testbed-node-1]
2025-05-25 03:53:08.377955 | orchestrator | changed: [testbed-node-2]
2025-05-25 03:53:08.377965 | orchestrator |
2025-05-25 03:53:08.377976 | orchestrator | PLAY [Restart rabbitmq services] ***********************************************
2025-05-25 03:53:08.377987 | orchestrator |
2025-05-25 03:53:08.377997 | orchestrator | TASK [rabbitmq : Get info on RabbitMQ container] *******************************
2025-05-25 03:53:08.378008 | orchestrator | Sunday 25 May 2025 03:51:32 +0000 (0:00:00.501) 0:00:37.467 ************
2025-05-25 03:53:08.378179 | orchestrator | ok: [testbed-node-0]
2025-05-25 03:53:08.378193 | orchestrator |
2025-05-25 03:53:08.378204 | orchestrator | TASK [rabbitmq : Put RabbitMQ node into maintenance mode] **********************
2025-05-25 03:53:08.378214 | orchestrator | Sunday 25 May 2025 03:51:33 +0000 (0:00:00.643) 0:00:38.111 ************
2025-05-25 03:53:08.378225 | orchestrator | skipping: [testbed-node-0]
2025-05-25 03:53:08.378236 | orchestrator |
2025-05-25 03:53:08.378247 | orchestrator | TASK [rabbitmq : Restart rabbitmq container] ***********************************
2025-05-25 03:53:08.378258 | orchestrator | Sunday 25 May 2025 03:51:33 +0000 (0:00:00.244) 0:00:38.355 ************
2025-05-25 03:53:08.378268 | orchestrator | changed: [testbed-node-0]
2025-05-25 03:53:08.378279 | orchestrator |
2025-05-25 03:53:08.378294 | orchestrator | TASK [rabbitmq : Waiting for rabbitmq to start] ********************************
2025-05-25 03:53:08.378318 | orchestrator | Sunday 25 May 2025 03:51:35 +0000 (0:00:01.855) 0:00:40.211 ************
2025-05-25 03:53:08.378343 | orchestrator | changed: [testbed-node-0]
2025-05-25 03:53:08.378360 | orchestrator |
2025-05-25 03:53:08.378378 | orchestrator | PLAY [Restart rabbitmq services] ***********************************************
2025-05-25 03:53:08.378395 | orchestrator |
2025-05-25 03:53:08.378411 | orchestrator | TASK [rabbitmq : Get info on RabbitMQ container] *******************************
2025-05-25 03:53:08.378430 | orchestrator | Sunday 25 May 2025 03:52:29 +0000 (0:00:54.146) 0:01:34.357 ************
2025-05-25 03:53:08.378447 | orchestrator | ok: [testbed-node-1]
2025-05-25 03:53:08.378466 | orchestrator |
2025-05-25 03:53:08.378485 | orchestrator | TASK [rabbitmq : Put RabbitMQ node into maintenance mode] **********************
2025-05-25 03:53:08.378503 | orchestrator | Sunday 25 May 2025 03:52:30 +0000 (0:00:00.613) 0:01:34.971 ************
2025-05-25 03:53:08.378520 | orchestrator | skipping: [testbed-node-1]
2025-05-25 03:53:08.378531 | orchestrator |
2025-05-25 03:53:08.378541 | orchestrator | TASK [rabbitmq : Restart rabbitmq container] ***********************************
2025-05-25 03:53:08.378552 | orchestrator | Sunday 25 May 2025 03:52:30 +0000 (0:00:00.536) 0:01:35.507 ************
2025-05-25 03:53:08.378563 | orchestrator | changed: [testbed-node-1]
2025-05-25 03:53:08.378573 | orchestrator |
2025-05-25 03:53:08.378584 | orchestrator | TASK [rabbitmq : Waiting for rabbitmq to start] ********************************
2025-05-25 03:53:08.378594 | orchestrator | Sunday 25 May 2025 03:52:32 +0000 (0:00:01.870) 0:01:37.378 ************
2025-05-25 03:53:08.378605 | orchestrator | changed: [testbed-node-1]
2025-05-25 03:53:08.378615 | orchestrator |
2025-05-25 03:53:08.378626 | orchestrator | PLAY [Restart rabbitmq services] ***********************************************
2025-05-25 03:53:08.378648 | orchestrator |
2025-05-25 03:53:08.378659 | orchestrator | TASK [rabbitmq : Get info on RabbitMQ container] *******************************
2025-05-25 03:53:08.378669 | orchestrator | Sunday 25 May 2025 03:52:46 +0000 (0:00:13.722) 0:01:51.101 ************
2025-05-25 03:53:08.378680 | orchestrator | ok: [testbed-node-2]
2025-05-25 03:53:08.378691 | orchestrator |
2025-05-25 03:53:08.378711 | orchestrator | TASK [rabbitmq : Put RabbitMQ node into maintenance mode] **********************
2025-05-25 03:53:08.378723 | orchestrator | Sunday 25 May 2025 03:52:47 +0000 (0:00:00.643) 0:01:51.744 ************
2025-05-25 03:53:08.378733 | orchestrator | skipping: [testbed-node-2]
2025-05-25 03:53:08.378744 | orchestrator |
2025-05-25 03:53:08.378755 | orchestrator | TASK [rabbitmq : Restart rabbitmq container] ***********************************
2025-05-25 03:53:08.378766 | orchestrator | Sunday 25 May 2025 03:52:47 +0000 (0:00:00.247) 0:01:51.992 ************
2025-05-25 03:53:08.378776 | orchestrator | changed: [testbed-node-2]
2025-05-25 03:53:08.378787 | orchestrator |
2025-05-25 03:53:08.378798 | orchestrator | TASK [rabbitmq : Waiting for rabbitmq to start] ********************************
2025-05-25 03:53:08.378809 | orchestrator | Sunday 25 May 2025 03:52:49 +0000 (0:00:01.708) 0:01:53.700 ************
2025-05-25 03:53:08.378819 | orchestrator | changed: [testbed-node-2]
2025-05-25 03:53:08.378830 | orchestrator |
2025-05-25 03:53:08.378840 | orchestrator | PLAY [Apply rabbitmq post-configuration] ***************************************
2025-05-25 03:53:08.378851 | orchestrator |
2025-05-25 03:53:08.378861 | orchestrator | TASK [Include rabbitmq post-deploy.yml] ****************************************
2025-05-25 03:53:08.378884 | orchestrator | Sunday 25 May 2025 03:53:03 +0000 (0:00:13.939) 0:02:07.640 ************
2025-05-25 03:53:08.378896 | orchestrator | included: rabbitmq for testbed-node-0, testbed-node-1, testbed-node-2
2025-05-25 03:53:08.378906 | orchestrator |
2025-05-25 03:53:08.378917 | orchestrator | TASK [rabbitmq : Enable all stable feature flags] ******************************
2025-05-25 03:53:08.378928 | orchestrator | Sunday 25 May 2025 03:53:04 +0000 (0:00:01.280) 0:02:08.920 ************
2025-05-25 03:53:08.378938 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring:
2025-05-25 03:53:08.378949 | orchestrator | enable_outward_rabbitmq_True
2025-05-25 03:53:08.378960 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring:
2025-05-25 03:53:08.378970 | orchestrator | outward_rabbitmq_restart
2025-05-25 03:53:08.378981 | orchestrator | ok: [testbed-node-0]
2025-05-25 03:53:08.378992 | orchestrator | ok: [testbed-node-1]
2025-05-25 03:53:08.379002 | orchestrator | ok: [testbed-node-2]
2025-05-25 03:53:08.379013 | orchestrator |
2025-05-25 03:53:08.379024 | orchestrator | PLAY [Apply role rabbitmq (outward)] *******************************************
2025-05-25 03:53:08.379034 | orchestrator | skipping: no hosts matched
2025-05-25 03:53:08.379045 | orchestrator |
2025-05-25 03:53:08.379055 | orchestrator | PLAY [Restart rabbitmq (outward) services] *************************************
2025-05-25 03:53:08.379066 | orchestrator | skipping: no hosts matched
2025-05-25 03:53:08.379077 | orchestrator |
2025-05-25 03:53:08.379087 | orchestrator |
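[Editor's aside: the three "Restart rabbitmq services" plays above follow the classic rolling-restart pattern: one node at a time, restart the container, then block until the broker reports ready before moving to the next node. A generic sketch of that loop; the node names match the log, but the callbacks are placeholders rather than the kolla-ansible implementation:]

```python
from typing import Callable, Iterable

def rolling_restart(nodes: Iterable[str],
                    restart: Callable[[str], None],
                    is_ready: Callable[[str], bool],
                    checks: int = 60) -> None:
    """Restart nodes serially; wait for each to become ready before the next."""
    for node in nodes:
        restart(node)
        for _ in range(checks):  # a real loop would sleep between checks
            if is_ready(node):
                break
        else:
            raise RuntimeError(f"{node} did not come back after restart")

events = []
rolling_restart(
    ["testbed-node-0", "testbed-node-1", "testbed-node-2"],
    restart=lambda n: events.append(("restart", n)),
    is_ready=lambda n: True,  # stub: the real check queries the broker
)
```

[Restarting serially is what keeps a quorum of brokers up; the long "Waiting for rabbitmq to start" timings in the recap below are exactly these per-node waits.]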
PLAY [Apply rabbitmq (outward) post-configuration] ***************************** 2025-05-25 03:53:08.379098 | orchestrator | skipping: no hosts matched 2025-05-25 03:53:08.379131 | orchestrator | 2025-05-25 03:53:08.379143 | orchestrator | PLAY RECAP ********************************************************************* 2025-05-25 03:53:08.379154 | orchestrator | localhost : ok=3  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=1  2025-05-25 03:53:08.379165 | orchestrator | testbed-node-0 : ok=23  changed=14  unreachable=0 failed=0 skipped=8  rescued=0 ignored=0 2025-05-25 03:53:08.379176 | orchestrator | testbed-node-1 : ok=21  changed=14  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-05-25 03:53:08.379187 | orchestrator | testbed-node-2 : ok=21  changed=14  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-05-25 03:53:08.379198 | orchestrator | 2025-05-25 03:53:08.379216 | orchestrator | 2025-05-25 03:53:08.379227 | orchestrator | TASKS RECAP ******************************************************************** 2025-05-25 03:53:08.379238 | orchestrator | Sunday 25 May 2025 03:53:06 +0000 (0:00:02.389) 0:02:11.309 ************ 2025-05-25 03:53:08.379249 | orchestrator | =============================================================================== 2025-05-25 03:53:08.379260 | orchestrator | rabbitmq : Waiting for rabbitmq to start ------------------------------- 81.81s 2025-05-25 03:53:08.379270 | orchestrator | rabbitmq : Running RabbitMQ bootstrap container ------------------------- 7.26s 2025-05-25 03:53:08.379281 | orchestrator | rabbitmq : Restart rabbitmq container ----------------------------------- 5.43s 2025-05-25 03:53:08.379292 | orchestrator | Check RabbitMQ service -------------------------------------------------- 3.58s 2025-05-25 03:53:08.379303 | orchestrator | rabbitmq : Copying over rabbitmq.conf ----------------------------------- 3.21s 2025-05-25 03:53:08.379313 | orchestrator | rabbitmq : Enable all stable 
feature flags ------------------------------ 2.39s 2025-05-25 03:53:08.379324 | orchestrator | rabbitmq : Copying over config.json files for services ------------------ 2.31s 2025-05-25 03:53:08.379335 | orchestrator | rabbitmq : Copying over erl_inetrc -------------------------------------- 1.99s 2025-05-25 03:53:08.379346 | orchestrator | rabbitmq : Get info on RabbitMQ container ------------------------------- 1.90s 2025-05-25 03:53:08.379356 | orchestrator | rabbitmq : Copying over advanced.config --------------------------------- 1.61s 2025-05-25 03:53:08.379367 | orchestrator | rabbitmq : include_tasks ------------------------------------------------ 1.61s 2025-05-25 03:53:08.379378 | orchestrator | rabbitmq : Check rabbitmq containers ------------------------------------ 1.43s 2025-05-25 03:53:08.379388 | orchestrator | rabbitmq : Copying over rabbitmq-env.conf ------------------------------- 1.41s 2025-05-25 03:53:08.379399 | orchestrator | rabbitmq : Copying over definitions.json -------------------------------- 1.36s 2025-05-25 03:53:08.379410 | orchestrator | Include rabbitmq post-deploy.yml ---------------------------------------- 1.28s 2025-05-25 03:53:08.379420 | orchestrator | rabbitmq : Copying over enabled_plugins --------------------------------- 1.26s 2025-05-25 03:53:08.379431 | orchestrator | rabbitmq : Ensuring config directories exist ---------------------------- 1.21s 2025-05-25 03:53:08.379447 | orchestrator | rabbitmq : Get container facts ------------------------------------------ 1.11s 2025-05-25 03:53:08.379458 | orchestrator | rabbitmq : Put RabbitMQ node into maintenance mode ---------------------- 1.03s 2025-05-25 03:53:08.379469 | orchestrator | Group hosts based on enabled services ----------------------------------- 1.02s 2025-05-25 03:53:11.429833 | orchestrator | 2025-05-25 03:53:11 | INFO  | Task f0db41f2-e75f-4158-81e2-acab8c664db0 is in state STARTED 2025-05-25 03:53:11.429948 | orchestrator | 2025-05-25 03:53:11 | INFO  | 
Task c0214740-d9b7-4bee-98a0-0214c94bbfee is in state STARTED 2025-05-25 03:53:11.429972 | orchestrator | 2025-05-25 03:53:11 | INFO  | Task a839d0dc-b1b8-464e-b4ab-1e7908546cf0 is in state STARTED 2025-05-25 03:53:11.429992 | orchestrator | 2025-05-25 03:53:11 | INFO  | Wait 1 second(s) until the next check 2025-05-25 03:54:21.579951 | orchestrator | 2025-05-25 03:54:21 | INFO  | Task f0db41f2-e75f-4158-81e2-acab8c664db0 is in state SUCCESS 2025-05-25 03:54:21.580976 | orchestrator | 2025-05-25 03:54:21.581132 | orchestrator | 2025-05-25 
03:54:21.581152 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-05-25 03:54:21.581165 | orchestrator | 2025-05-25 03:54:21.581176 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-05-25 03:54:21.581187 | orchestrator | Sunday 25 May 2025 03:51:49 +0000 (0:00:00.170) 0:00:00.170 ************ 2025-05-25 03:54:21.581199 | orchestrator | ok: [testbed-node-0] 2025-05-25 03:54:21.581212 | orchestrator | ok: [testbed-node-1] 2025-05-25 03:54:21.581251 | orchestrator | ok: [testbed-node-2] 2025-05-25 03:54:21.581262 | orchestrator | ok: [testbed-node-3] 2025-05-25 03:54:21.581273 | orchestrator | ok: [testbed-node-4] 2025-05-25 03:54:21.581283 | orchestrator | ok: [testbed-node-5] 2025-05-25 03:54:21.581294 | orchestrator | 2025-05-25 03:54:21.581305 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-05-25 03:54:21.581316 | orchestrator | Sunday 25 May 2025 03:51:50 +0000 (0:00:00.666) 0:00:00.836 ************ 2025-05-25 03:54:21.581355 | orchestrator | ok: [testbed-node-0] => (item=enable_ovn_True) 2025-05-25 03:54:21.581369 | orchestrator | ok: [testbed-node-1] => (item=enable_ovn_True) 2025-05-25 03:54:21.581380 | orchestrator | ok: [testbed-node-2] => (item=enable_ovn_True) 2025-05-25 03:54:21.581391 | orchestrator | ok: [testbed-node-3] => (item=enable_ovn_True) 2025-05-25 03:54:21.581402 | orchestrator | ok: [testbed-node-4] => (item=enable_ovn_True) 2025-05-25 03:54:21.581412 | orchestrator | ok: [testbed-node-5] => (item=enable_ovn_True) 2025-05-25 03:54:21.581423 | orchestrator | 2025-05-25 03:54:21.581434 | orchestrator | PLAY [Apply role ovn-controller] *********************************************** 2025-05-25 03:54:21.581445 | orchestrator | 2025-05-25 03:54:21.581455 | orchestrator | TASK [ovn-controller : include_tasks] ****************************************** 2025-05-25 03:54:21.581498 | 
orchestrator | Sunday 25 May 2025 03:51:50 +0000 (0:00:00.736) 0:00:01.572 ************ 2025-05-25 03:54:21.581512 | orchestrator | included: /ansible/roles/ovn-controller/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-05-25 03:54:21.581525 | orchestrator | 2025-05-25 03:54:21.581538 | orchestrator | TASK [ovn-controller : Ensuring config directories exist] ********************** 2025-05-25 03:54:21.581550 | orchestrator | Sunday 25 May 2025 03:51:51 +0000 (0:00:01.159) 0:00:02.732 ************ 2025-05-25 03:54:21.581565 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-25 03:54:21.581595 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-25 03:54:21.581609 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-25 03:54:21.581622 | orchestrator | 
changed: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-25 03:54:21.581635 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-25 03:54:21.581656 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-25 03:54:21.581669 | orchestrator | 2025-05-25 03:54:21.581696 | orchestrator | TASK [ovn-controller : Copying over config.json files for services] ************ 2025-05-25 03:54:21.581709 | orchestrator | Sunday 25 May 2025 03:51:53 +0000 (0:00:01.650) 0:00:04.382 ************ 2025-05-25 03:54:21.581722 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', 
'/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-25 03:54:21.581740 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-25 03:54:21.581761 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-25 03:54:21.581781 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-25 03:54:21.581810 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-25 03:54:21.581831 | orchestrator | changed: [testbed-node-5] => 
(item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-25 03:54:21.581851 | orchestrator | 2025-05-25 03:54:21.581896 | orchestrator | TASK [ovn-controller : Ensuring systemd override directory exists] ************* 2025-05-25 03:54:21.581907 | orchestrator | Sunday 25 May 2025 03:51:55 +0000 (0:00:01.772) 0:00:06.155 ************ 2025-05-25 03:54:21.581918 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-25 03:54:21.581937 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-25 03:54:21.581958 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', 
'/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-25 03:54:21.581969 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-25 03:54:21.581980 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-25 03:54:21.581991 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-25 03:54:21.582002 | orchestrator | 2025-05-25 03:54:21.582013 | orchestrator | TASK [ovn-controller : Copying over systemd override] ************************** 2025-05-25 03:54:21.582379 | orchestrator | Sunday 25 May 2025 03:51:56 +0000 (0:00:01.196) 0:00:07.351 ************ 2025-05-25 03:54:21.582393 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 
'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-25 03:54:21.582411 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-25 03:54:21.582424 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-25 03:54:21.582435 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-25 03:54:21.582455 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', 
'/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-25 03:54:21.582467 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-25 03:54:21.582478 | orchestrator | 2025-05-25 03:54:21.582496 | orchestrator | TASK [ovn-controller : Check ovn-controller containers] ************************ 2025-05-25 03:54:21.582508 | orchestrator | Sunday 25 May 2025 03:51:57 +0000 (0:00:01.360) 0:00:08.712 ************ 2025-05-25 03:54:21.582519 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-25 03:54:21.582530 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-25 03:54:21.582542 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 
'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-25 03:54:21.582553 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-25 03:54:21.582569 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-25 03:54:21.582581 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-25 03:54:21.582598 | orchestrator | 2025-05-25 03:54:21.582609 | orchestrator | TASK [ovn-controller : Create br-int bridge on OpenvSwitch] ******************** 2025-05-25 03:54:21.582620 | orchestrator | Sunday 25 May 2025 03:51:59 +0000 (0:00:01.332) 0:00:10.045 ************ 2025-05-25 03:54:21.582632 | orchestrator | changed: 
[testbed-node-2] 2025-05-25 03:54:21.582643 | orchestrator | changed: [testbed-node-3] 2025-05-25 03:54:21.582653 | orchestrator | changed: [testbed-node-0] 2025-05-25 03:54:21.582664 | orchestrator | changed: [testbed-node-1] 2025-05-25 03:54:21.582676 | orchestrator | changed: [testbed-node-4] 2025-05-25 03:54:21.582686 | orchestrator | changed: [testbed-node-5] 2025-05-25 03:54:21.582697 | orchestrator | 2025-05-25 03:54:21.582708 | orchestrator | TASK [ovn-controller : Configure OVN in OVSDB] ********************************* 2025-05-25 03:54:21.582719 | orchestrator | Sunday 25 May 2025 03:52:01 +0000 (0:00:02.309) 0:00:12.355 ************ 2025-05-25 03:54:21.582730 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.10'}) 2025-05-25 03:54:21.582741 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.11'}) 2025-05-25 03:54:21.582751 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.12'}) 2025-05-25 03:54:21.582763 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.13'}) 2025-05-25 03:54:21.582783 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.14'}) 2025-05-25 03:54:21.582802 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.15'}) 2025-05-25 03:54:21.582820 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-encap-type', 'value': 'geneve'}) 2025-05-25 03:54:21.582837 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-encap-type', 'value': 'geneve'}) 2025-05-25 03:54:21.582867 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-encap-type', 'value': 'geneve'}) 2025-05-25 03:54:21.582886 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-encap-type', 'value': 'geneve'}) 2025-05-25 03:54:21.582905 | orchestrator | changed: 
[testbed-node-4] => (item={'name': 'ovn-encap-type', 'value': 'geneve'}) 2025-05-25 03:54:21.582923 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-encap-type', 'value': 'geneve'}) 2025-05-25 03:54:21.582943 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}) 2025-05-25 03:54:21.582966 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}) 2025-05-25 03:54:21.582985 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}) 2025-05-25 03:54:21.583004 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}) 2025-05-25 03:54:21.583015 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}) 2025-05-25 03:54:21.583026 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}) 2025-05-25 03:54:21.583037 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'}) 2025-05-25 03:54:21.583048 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'}) 2025-05-25 03:54:21.583060 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'}) 2025-05-25 03:54:21.583071 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'}) 2025-05-25 03:54:21.583113 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'}) 2025-05-25 
03:54:21.583125 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'}) 2025-05-25 03:54:21.583136 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'}) 2025-05-25 03:54:21.583146 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'}) 2025-05-25 03:54:21.583157 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'}) 2025-05-25 03:54:21.583168 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'}) 2025-05-25 03:54:21.583185 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'}) 2025-05-25 03:54:21.583196 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'}) 2025-05-25 03:54:21.583207 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-monitor-all', 'value': False}) 2025-05-25 03:54:21.583218 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-monitor-all', 'value': False}) 2025-05-25 03:54:21.583229 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-monitor-all', 'value': False}) 2025-05-25 03:54:21.583239 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-monitor-all', 'value': False}) 2025-05-25 03:54:21.583250 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-monitor-all', 'value': False}) 2025-05-25 03:54:21.583261 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-monitor-all', 'value': False}) 2025-05-25 03:54:21.583272 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'present'}) 2025-05-25 03:54:21.583283 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'present'}) 
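The "Configure OVN in OVSDB" task above writes per-chassis settings into the local Open vSwitch database as `external_ids`, which ovn-controller reads at startup. A hand-run equivalent for one chassis might look like the following sketch (values copied from testbed-node-0 in the log; this is not the playbook's actual module invocation and requires a live `ovs-vsctl` on the host):

```shell
# Point ovn-controller at the clustered southbound DB and set the
# Geneve tunnel endpoint for this chassis (testbed-node-0 values).
ovs-vsctl set Open_vSwitch . \
    external_ids:ovn-encap-ip=192.168.16.10 \
    external_ids:ovn-encap-type=geneve \
    external_ids:ovn-remote="tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642" \
    external_ids:ovn-remote-probe-interval=60000 \
    external_ids:ovn-openflow-probe-interval=60 \
    external_ids:ovn-monitor-all=false
```

The bridge-mapping and chassis-MAC entries that follow in the log are set the same way, with `state: absent` items translating to `ovs-vsctl remove` instead of `set`.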
2025-05-25 03:54:21.583294 | orchestrator | ok: [testbed-node-3] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'absent'}) 2025-05-25 03:54:21.583305 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'present'}) 2025-05-25 03:54:21.583316 | orchestrator | ok: [testbed-node-4] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'absent'}) 2025-05-25 03:54:21.583327 | orchestrator | ok: [testbed-node-5] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'absent'}) 2025-05-25 03:54:21.583337 | orchestrator | ok: [testbed-node-0] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:52:c1:40', 'state': 'absent'}) 2025-05-25 03:54:21.583349 | orchestrator | ok: [testbed-node-2] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:29:4a:9b', 'state': 'absent'}) 2025-05-25 03:54:21.583367 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:89:18:56', 'state': 'present'}) 2025-05-25 03:54:21.583379 | orchestrator | ok: [testbed-node-1] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:33:12:50', 'state': 'absent'}) 2025-05-25 03:54:21.583390 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:2f:fa:44', 'state': 'present'}) 2025-05-25 03:54:21.583401 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:71:3a:c3', 'state': 'present'}) 2025-05-25 03:54:21.583412 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-cms-options', 'value': 'enable-chassis-as-gw,availability-zones=nova', 'state': 'present'}) 2025-05-25 03:54:21.583423 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-cms-options', 'value': 
'enable-chassis-as-gw,availability-zones=nova', 'state': 'present'}) 2025-05-25 03:54:21.583440 | orchestrator | ok: [testbed-node-3] => (item={'name': 'ovn-cms-options', 'value': '', 'state': 'absent'}) 2025-05-25 03:54:21.583451 | orchestrator | ok: [testbed-node-4] => (item={'name': 'ovn-cms-options', 'value': '', 'state': 'absent'}) 2025-05-25 03:54:21.583462 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-cms-options', 'value': 'enable-chassis-as-gw,availability-zones=nova', 'state': 'present'}) 2025-05-25 03:54:21.583473 | orchestrator | ok: [testbed-node-5] => (item={'name': 'ovn-cms-options', 'value': '', 'state': 'absent'}) 2025-05-25 03:54:21.583484 | orchestrator | 2025-05-25 03:54:21.583495 | orchestrator | TASK [ovn-controller : Flush handlers] ***************************************** 2025-05-25 03:54:21.583506 | orchestrator | Sunday 25 May 2025 03:52:19 +0000 (0:00:18.212) 0:00:30.567 ************ 2025-05-25 03:54:21.583517 | orchestrator | 2025-05-25 03:54:21.583528 | orchestrator | TASK [ovn-controller : Flush handlers] ***************************************** 2025-05-25 03:54:21.583539 | orchestrator | Sunday 25 May 2025 03:52:19 +0000 (0:00:00.063) 0:00:30.630 ************ 2025-05-25 03:54:21.583550 | orchestrator | 2025-05-25 03:54:21.583560 | orchestrator | TASK [ovn-controller : Flush handlers] ***************************************** 2025-05-25 03:54:21.583571 | orchestrator | Sunday 25 May 2025 03:52:19 +0000 (0:00:00.061) 0:00:30.692 ************ 2025-05-25 03:54:21.583582 | orchestrator | 2025-05-25 03:54:21.583593 | orchestrator | TASK [ovn-controller : Flush handlers] ***************************************** 2025-05-25 03:54:21.583604 | orchestrator | Sunday 25 May 2025 03:52:19 +0000 (0:00:00.061) 0:00:30.753 ************ 2025-05-25 03:54:21.583615 | orchestrator | 2025-05-25 03:54:21.583626 | orchestrator | TASK [ovn-controller : Flush handlers] ***************************************** 2025-05-25 03:54:21.583636 
| orchestrator | Sunday 25 May 2025 03:52:19 +0000 (0:00:00.061) 0:00:30.814 ************ 2025-05-25 03:54:21.583647 | orchestrator | 2025-05-25 03:54:21.583743 | orchestrator | TASK [ovn-controller : Flush handlers] ***************************************** 2025-05-25 03:54:21.583758 | orchestrator | Sunday 25 May 2025 03:52:20 +0000 (0:00:00.062) 0:00:30.877 ************ 2025-05-25 03:54:21.583769 | orchestrator | 2025-05-25 03:54:21.583785 | orchestrator | RUNNING HANDLER [ovn-controller : Reload systemd config] *********************** 2025-05-25 03:54:21.583796 | orchestrator | Sunday 25 May 2025 03:52:20 +0000 (0:00:00.063) 0:00:30.941 ************ 2025-05-25 03:54:21.583807 | orchestrator | ok: [testbed-node-0] 2025-05-25 03:54:21.583818 | orchestrator | ok: [testbed-node-4] 2025-05-25 03:54:21.583829 | orchestrator | ok: [testbed-node-3] 2025-05-25 03:54:21.583839 | orchestrator | ok: [testbed-node-1] 2025-05-25 03:54:21.583850 | orchestrator | ok: [testbed-node-2] 2025-05-25 03:54:21.583860 | orchestrator | ok: [testbed-node-5] 2025-05-25 03:54:21.583871 | orchestrator | 2025-05-25 03:54:21.583882 | orchestrator | RUNNING HANDLER [ovn-controller : Restart ovn-controller container] ************ 2025-05-25 03:54:21.583892 | orchestrator | Sunday 25 May 2025 03:52:21 +0000 (0:00:01.696) 0:00:32.637 ************ 2025-05-25 03:54:21.583903 | orchestrator | changed: [testbed-node-0] 2025-05-25 03:54:21.583914 | orchestrator | changed: [testbed-node-2] 2025-05-25 03:54:21.583924 | orchestrator | changed: [testbed-node-4] 2025-05-25 03:54:21.583935 | orchestrator | changed: [testbed-node-1] 2025-05-25 03:54:21.583946 | orchestrator | changed: [testbed-node-3] 2025-05-25 03:54:21.583956 | orchestrator | changed: [testbed-node-5] 2025-05-25 03:54:21.583967 | orchestrator | 2025-05-25 03:54:21.583978 | orchestrator | PLAY [Apply role ovn-db] ******************************************************* 2025-05-25 03:54:21.583989 | orchestrator | 2025-05-25 03:54:21.584000 | 
orchestrator | TASK [ovn-db : include_tasks] ************************************************** 2025-05-25 03:54:21.584010 | orchestrator | Sunday 25 May 2025 03:53:03 +0000 (0:00:41.343) 0:01:13.981 ************ 2025-05-25 03:54:21.584021 | orchestrator | included: /ansible/roles/ovn-db/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-25 03:54:21.584032 | orchestrator | 2025-05-25 03:54:21.584043 | orchestrator | TASK [ovn-db : include_tasks] ************************************************** 2025-05-25 03:54:21.584053 | orchestrator | Sunday 25 May 2025 03:53:04 +0000 (0:00:00.866) 0:01:14.847 ************ 2025-05-25 03:54:21.584072 | orchestrator | included: /ansible/roles/ovn-db/tasks/lookup_cluster.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-25 03:54:21.584083 | orchestrator | 2025-05-25 03:54:21.584110 | orchestrator | TASK [ovn-db : Checking for any existing OVN DB container volumes] ************* 2025-05-25 03:54:21.584121 | orchestrator | Sunday 25 May 2025 03:53:05 +0000 (0:00:01.056) 0:01:15.904 ************ 2025-05-25 03:54:21.584132 | orchestrator | ok: [testbed-node-1] 2025-05-25 03:54:21.584143 | orchestrator | ok: [testbed-node-2] 2025-05-25 03:54:21.584154 | orchestrator | ok: [testbed-node-0] 2025-05-25 03:54:21.584164 | orchestrator | 2025-05-25 03:54:21.584175 | orchestrator | TASK [ovn-db : Divide hosts by their OVN NB volume availability] *************** 2025-05-25 03:54:21.584186 | orchestrator | Sunday 25 May 2025 03:53:06 +0000 (0:00:00.937) 0:01:16.841 ************ 2025-05-25 03:54:21.584197 | orchestrator | ok: [testbed-node-0] 2025-05-25 03:54:21.584208 | orchestrator | ok: [testbed-node-1] 2025-05-25 03:54:21.584219 | orchestrator | ok: [testbed-node-2] 2025-05-25 03:54:21.584236 | orchestrator | 2025-05-25 03:54:21.584247 | orchestrator | TASK [ovn-db : Divide hosts by their OVN SB volume availability] *************** 2025-05-25 03:54:21.584258 | orchestrator | Sunday 25 May 2025 03:53:06 
+0000 (0:00:00.407) 0:01:17.249 ************ 2025-05-25 03:54:21.584269 | orchestrator | ok: [testbed-node-0] 2025-05-25 03:54:21.584279 | orchestrator | ok: [testbed-node-1] 2025-05-25 03:54:21.584290 | orchestrator | ok: [testbed-node-2] 2025-05-25 03:54:21.584300 | orchestrator | 2025-05-25 03:54:21.584311 | orchestrator | TASK [ovn-db : Establish whether the OVN NB cluster has already existed] ******* 2025-05-25 03:54:21.584322 | orchestrator | Sunday 25 May 2025 03:53:06 +0000 (0:00:00.342) 0:01:17.592 ************ 2025-05-25 03:54:21.584332 | orchestrator | ok: [testbed-node-0] 2025-05-25 03:54:21.584343 | orchestrator | ok: [testbed-node-1] 2025-05-25 03:54:21.584354 | orchestrator | ok: [testbed-node-2] 2025-05-25 03:54:21.584364 | orchestrator | 2025-05-25 03:54:21.584375 | orchestrator | TASK [ovn-db : Establish whether the OVN SB cluster has already existed] ******* 2025-05-25 03:54:21.584386 | orchestrator | Sunday 25 May 2025 03:53:07 +0000 (0:00:00.517) 0:01:18.109 ************ 2025-05-25 03:54:21.584396 | orchestrator | ok: [testbed-node-0] 2025-05-25 03:54:21.584407 | orchestrator | ok: [testbed-node-1] 2025-05-25 03:54:21.584418 | orchestrator | ok: [testbed-node-2] 2025-05-25 03:54:21.584428 | orchestrator | 2025-05-25 03:54:21.584439 | orchestrator | TASK [ovn-db : Check if running on all OVN NB DB hosts] ************************ 2025-05-25 03:54:21.584450 | orchestrator | Sunday 25 May 2025 03:53:07 +0000 (0:00:00.379) 0:01:18.489 ************ 2025-05-25 03:54:21.584461 | orchestrator | skipping: [testbed-node-0] 2025-05-25 03:54:21.584472 | orchestrator | skipping: [testbed-node-1] 2025-05-25 03:54:21.584482 | orchestrator | skipping: [testbed-node-2] 2025-05-25 03:54:21.584493 | orchestrator | 2025-05-25 03:54:21.584504 | orchestrator | TASK [ovn-db : Check OVN NB service port liveness] ***************************** 2025-05-25 03:54:21.584514 | orchestrator | Sunday 25 May 2025 03:53:08 +0000 (0:00:00.335) 0:01:18.825 ************ 2025-05-25 
03:54:21.584525 | orchestrator | skipping: [testbed-node-0] 2025-05-25 03:54:21.584536 | orchestrator | skipping: [testbed-node-1] 2025-05-25 03:54:21.584546 | orchestrator | skipping: [testbed-node-2] 2025-05-25 03:54:21.584557 | orchestrator | 2025-05-25 03:54:21.584568 | orchestrator | TASK [ovn-db : Divide hosts by their OVN NB service port liveness] ************* 2025-05-25 03:54:21.584579 | orchestrator | Sunday 25 May 2025 03:53:08 +0000 (0:00:00.319) 0:01:19.145 ************ 2025-05-25 03:54:21.584589 | orchestrator | skipping: [testbed-node-0] 2025-05-25 03:54:21.584600 | orchestrator | skipping: [testbed-node-1] 2025-05-25 03:54:21.584611 | orchestrator | skipping: [testbed-node-2] 2025-05-25 03:54:21.584621 | orchestrator | 2025-05-25 03:54:21.584632 | orchestrator | TASK [ovn-db : Get OVN NB database information] ******************************** 2025-05-25 03:54:21.584643 | orchestrator | Sunday 25 May 2025 03:53:08 +0000 (0:00:00.477) 0:01:19.622 ************ 2025-05-25 03:54:21.584654 | orchestrator | skipping: [testbed-node-0] 2025-05-25 03:54:21.584664 | orchestrator | skipping: [testbed-node-1] 2025-05-25 03:54:21.584689 | orchestrator | skipping: [testbed-node-2] 2025-05-25 03:54:21.584700 | orchestrator | 2025-05-25 03:54:21.584711 | orchestrator | TASK [ovn-db : Divide hosts by their OVN NB leader/follower role] ************** 2025-05-25 03:54:21.584722 | orchestrator | Sunday 25 May 2025 03:53:09 +0000 (0:00:00.323) 0:01:19.946 ************ 2025-05-25 03:54:21.584733 | orchestrator | skipping: [testbed-node-0] 2025-05-25 03:54:21.584743 | orchestrator | skipping: [testbed-node-1] 2025-05-25 03:54:21.584754 | orchestrator | skipping: [testbed-node-2] 2025-05-25 03:54:21.584765 | orchestrator | 2025-05-25 03:54:21.584776 | orchestrator | TASK [ovn-db : Fail on existing OVN NB cluster with no leader] ***************** 2025-05-25 03:54:21.584792 | orchestrator | Sunday 25 May 2025 03:53:09 +0000 (0:00:00.373) 0:01:20.320 ************ 2025-05-25 
03:54:21.584803 | orchestrator | skipping: [testbed-node-0] 2025-05-25 03:54:21.584814 | orchestrator | skipping: [testbed-node-1] 2025-05-25 03:54:21.584824 | orchestrator | skipping: [testbed-node-2] 2025-05-25 03:54:21.584835 | orchestrator | 2025-05-25 03:54:21.584846 | orchestrator | TASK [ovn-db : Check if running on all OVN SB DB hosts] ************************ 2025-05-25 03:54:21.584856 | orchestrator | Sunday 25 May 2025 03:53:09 +0000 (0:00:00.346) 0:01:20.666 ************ 2025-05-25 03:54:21.584867 | orchestrator | skipping: [testbed-node-0] 2025-05-25 03:54:21.584878 | orchestrator | skipping: [testbed-node-1] 2025-05-25 03:54:21.584888 | orchestrator | skipping: [testbed-node-2] 2025-05-25 03:54:21.584899 | orchestrator | 2025-05-25 03:54:21.584909 | orchestrator | TASK [ovn-db : Check OVN SB service port liveness] ***************************** 2025-05-25 03:54:21.584920 | orchestrator | Sunday 25 May 2025 03:53:10 +0000 (0:00:00.490) 0:01:21.156 ************ 2025-05-25 03:54:21.584931 | orchestrator | skipping: [testbed-node-0] 2025-05-25 03:54:21.584942 | orchestrator | skipping: [testbed-node-1] 2025-05-25 03:54:21.584952 | orchestrator | skipping: [testbed-node-2] 2025-05-25 03:54:21.584963 | orchestrator | 2025-05-25 03:54:21.584973 | orchestrator | TASK [ovn-db : Divide hosts by their OVN SB service port liveness] ************* 2025-05-25 03:54:21.584984 | orchestrator | Sunday 25 May 2025 03:53:10 +0000 (0:00:00.354) 0:01:21.511 ************ 2025-05-25 03:54:21.584995 | orchestrator | skipping: [testbed-node-0] 2025-05-25 03:54:21.585006 | orchestrator | skipping: [testbed-node-1] 2025-05-25 03:54:21.585016 | orchestrator | skipping: [testbed-node-2] 2025-05-25 03:54:21.585027 | orchestrator | 2025-05-25 03:54:21.585038 | orchestrator | TASK [ovn-db : Get OVN SB database information] ******************************** 2025-05-25 03:54:21.585048 | orchestrator | Sunday 25 May 2025 03:53:11 +0000 (0:00:00.323) 0:01:21.835 ************ 2025-05-25 
03:54:21.585059 | orchestrator | skipping: [testbed-node-0] 2025-05-25 03:54:21.585070 | orchestrator | skipping: [testbed-node-1] 2025-05-25 03:54:21.585080 | orchestrator | skipping: [testbed-node-2] 2025-05-25 03:54:21.585156 | orchestrator | 2025-05-25 03:54:21.585167 | orchestrator | TASK [ovn-db : Divide hosts by their OVN SB leader/follower role] ************** 2025-05-25 03:54:21.585178 | orchestrator | Sunday 25 May 2025 03:53:11 +0000 (0:00:00.304) 0:01:22.139 ************ 2025-05-25 03:54:21.585189 | orchestrator | skipping: [testbed-node-0] 2025-05-25 03:54:21.585199 | orchestrator | skipping: [testbed-node-1] 2025-05-25 03:54:21.585210 | orchestrator | skipping: [testbed-node-2] 2025-05-25 03:54:21.585221 | orchestrator | 2025-05-25 03:54:21.585232 | orchestrator | TASK [ovn-db : Fail on existing OVN SB cluster with no leader] ***************** 2025-05-25 03:54:21.585243 | orchestrator | Sunday 25 May 2025 03:53:11 +0000 (0:00:00.557) 0:01:22.696 ************ 2025-05-25 03:54:21.585254 | orchestrator | skipping: [testbed-node-0] 2025-05-25 03:54:21.585265 | orchestrator | skipping: [testbed-node-1] 2025-05-25 03:54:21.585282 | orchestrator | skipping: [testbed-node-2] 2025-05-25 03:54:21.585293 | orchestrator | 2025-05-25 03:54:21.585304 | orchestrator | TASK [ovn-db : include_tasks] ************************************************** 2025-05-25 03:54:21.585315 | orchestrator | Sunday 25 May 2025 03:53:12 +0000 (0:00:00.446) 0:01:23.143 ************ 2025-05-25 03:54:21.585326 | orchestrator | included: /ansible/roles/ovn-db/tasks/bootstrap-initial.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-25 03:54:21.585337 | orchestrator | 2025-05-25 03:54:21.585356 | orchestrator | TASK [ovn-db : Set bootstrap args fact for NB (new cluster)] ******************* 2025-05-25 03:54:21.585367 | orchestrator | Sunday 25 May 2025 03:53:13 +0000 (0:00:00.797) 0:01:23.941 ************ 2025-05-25 03:54:21.585378 | orchestrator | ok: [testbed-node-0] 
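The lookup_cluster.yml checks above are all skipped because no existing NB/SB volumes were found, so the role proceeds to bootstrap a fresh Raft cluster. On a running deployment, the cluster state these tasks would inspect can be queried manually; a sketch (container name `ovn_nb_db` is taken from the log, the control-socket path is assumed from upstream OVN defaults and may differ in kolla images):

```shell
# Inspect Raft cluster state of the northbound DB inside the
# ovn_nb_db container: shows servers, leader, term, and index.
docker exec ovn_nb_db ovn-appctl -t /var/run/ovn/ovnnb_db.ctl \
    cluster/status OVN_Northbound
```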
2025-05-25 03:54:21.585388 | orchestrator | ok: [testbed-node-1] 2025-05-25 03:54:21.585399 | orchestrator | ok: [testbed-node-2] 2025-05-25 03:54:21.585410 | orchestrator | 2025-05-25 03:54:21.585420 | orchestrator | TASK [ovn-db : Set bootstrap args fact for SB (new cluster)] ******************* 2025-05-25 03:54:21.585431 | orchestrator | Sunday 25 May 2025 03:53:14 +0000 (0:00:01.081) 0:01:25.022 ************ 2025-05-25 03:54:21.585442 | orchestrator | ok: [testbed-node-0] 2025-05-25 03:54:21.585452 | orchestrator | ok: [testbed-node-1] 2025-05-25 03:54:21.585463 | orchestrator | ok: [testbed-node-2] 2025-05-25 03:54:21.585473 | orchestrator | 2025-05-25 03:54:21.585484 | orchestrator | TASK [ovn-db : Check NB cluster status] **************************************** 2025-05-25 03:54:21.585494 | orchestrator | Sunday 25 May 2025 03:53:14 +0000 (0:00:00.441) 0:01:25.463 ************ 2025-05-25 03:54:21.585505 | orchestrator | skipping: [testbed-node-0] 2025-05-25 03:54:21.585516 | orchestrator | skipping: [testbed-node-1] 2025-05-25 03:54:21.585526 | orchestrator | skipping: [testbed-node-2] 2025-05-25 03:54:21.585537 | orchestrator | 2025-05-25 03:54:21.585548 | orchestrator | TASK [ovn-db : Check SB cluster status] **************************************** 2025-05-25 03:54:21.585558 | orchestrator | Sunday 25 May 2025 03:53:14 +0000 (0:00:00.327) 0:01:25.790 ************ 2025-05-25 03:54:21.585569 | orchestrator | skipping: [testbed-node-0] 2025-05-25 03:54:21.585579 | orchestrator | skipping: [testbed-node-1] 2025-05-25 03:54:21.585590 | orchestrator | skipping: [testbed-node-2] 2025-05-25 03:54:21.585640 | orchestrator | 2025-05-25 03:54:21.585653 | orchestrator | TASK [ovn-db : Remove an old node with the same ip address as the new node in NB DB] *** 2025-05-25 03:54:21.585663 | orchestrator | Sunday 25 May 2025 03:53:15 +0000 (0:00:00.380) 0:01:26.171 ************ 2025-05-25 03:54:21.585672 | orchestrator | skipping: [testbed-node-0] 2025-05-25 
03:54:21.585682 | orchestrator | skipping: [testbed-node-1] 2025-05-25 03:54:21.585691 | orchestrator | skipping: [testbed-node-2] 2025-05-25 03:54:21.585701 | orchestrator | 2025-05-25 03:54:21.585710 | orchestrator | TASK [ovn-db : Remove an old node with the same ip address as the new node in SB DB] *** 2025-05-25 03:54:21.585720 | orchestrator | Sunday 25 May 2025 03:53:16 +0000 (0:00:00.791) 0:01:26.962 ************ 2025-05-25 03:54:21.585729 | orchestrator | skipping: [testbed-node-0] 2025-05-25 03:54:21.585738 | orchestrator | skipping: [testbed-node-1] 2025-05-25 03:54:21.585748 | orchestrator | skipping: [testbed-node-2] 2025-05-25 03:54:21.585757 | orchestrator | 2025-05-25 03:54:21.585767 | orchestrator | TASK [ovn-db : Set bootstrap args fact for NB (new member)] ******************** 2025-05-25 03:54:21.585776 | orchestrator | Sunday 25 May 2025 03:53:16 +0000 (0:00:00.439) 0:01:27.402 ************ 2025-05-25 03:54:21.585786 | orchestrator | skipping: [testbed-node-0] 2025-05-25 03:54:21.585795 | orchestrator | skipping: [testbed-node-1] 2025-05-25 03:54:21.585805 | orchestrator | skipping: [testbed-node-2] 2025-05-25 03:54:21.585814 | orchestrator | 2025-05-25 03:54:21.585829 | orchestrator | TASK [ovn-db : Set bootstrap args fact for SB (new member)] ******************** 2025-05-25 03:54:21.585839 | orchestrator | Sunday 25 May 2025 03:53:16 +0000 (0:00:00.339) 0:01:27.741 ************ 2025-05-25 03:54:21.585848 | orchestrator | skipping: [testbed-node-0] 2025-05-25 03:54:21.585858 | orchestrator | skipping: [testbed-node-1] 2025-05-25 03:54:21.585867 | orchestrator | skipping: [testbed-node-2] 2025-05-25 03:54:21.585877 | orchestrator | 2025-05-25 03:54:21.585886 | orchestrator | TASK [ovn-db : Ensuring config directories exist] ****************************** 2025-05-25 03:54:21.585896 | orchestrator | Sunday 25 May 2025 03:53:17 +0000 (0:00:00.378) 0:01:28.120 ************ 2025-05-25 03:54:21.585906 | orchestrator | changed: [testbed-node-0] => 
(item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-25 03:54:21.585929 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-25 03:54:21.585939 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-25 03:54:21.585956 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-25 03:54:21.585969 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 
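The "Ensuring config directories exist" task iterates a kolla-style services dict, creating one host-side config directory per enabled service; each directory is then bind-mounted read-only into its container as `/var/lib/kolla/config_files/`. A minimal Python sketch of that iteration (service keys copied from the log; the `/etc/kolla/<name>` layout is inferred from the volume mounts shown above):

```python
# Sketch of the with_dict-style loop over ovn-db services, reduced
# to the fields that drive directory creation.
services = {
    "ovn-northd": {"container_name": "ovn_northd", "enabled": True},
    "ovn-nb-db": {"container_name": "ovn_nb_db", "enabled": True},
    "ovn-sb-db": {"container_name": "ovn_sb_db", "enabled": True},
}

# Each enabled service gets a host-side config directory that kolla
# bind-mounts into the container at /var/lib/kolla/config_files/.
config_dirs = [
    f"/etc/kolla/{name}" for name, svc in services.items() if svc["enabled"]
]
print(config_dirs)
```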
2025-05-25 03:54:21.585979 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-25 03:54:21.585989 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-25 03:54:21.585999 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-25 03:54:21.586009 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-25 03:54:21.586062 | orchestrator | 2025-05-25 03:54:21.586073 | orchestrator | TASK [ovn-db : Copying over config.json files for services] ******************** 2025-05-25 03:54:21.586082 | orchestrator | Sunday 25 May 2025 
03:53:19 +0000 (0:00:01.764) 0:01:29.884 ************ 2025-05-25 03:54:21.586155 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-25 03:54:21.586172 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-25 03:54:21.586182 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-25 03:54:21.586192 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-25 03:54:21.586208 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-25 03:54:21.586218 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-25 03:54:21.586229 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-25 03:54:21.586238 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-25 03:54:21.586248 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-25 03:54:21.586258 | orchestrator | 2025-05-25 03:54:21.586268 | orchestrator | TASK [ovn-db : Check 
ovn containers] ******************************************* 2025-05-25 03:54:21.586278 | orchestrator | Sunday 25 May 2025 03:53:22 +0000 (0:00:03.672) 0:01:33.557 ************ 2025-05-25 03:54:21.586288 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-25 03:54:21.586309 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-25 03:54:21.586319 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-25 03:54:21.586330 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-25 03:54:21.586339 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 
'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-25 03:54:21.586354 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-25 03:54:21.586364 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-25 03:54:21.586374 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-25 03:54:21.586384 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 
'dimensions': {}}}) 2025-05-25 03:54:21.586394 | orchestrator | 2025-05-25 03:54:21.586403 | orchestrator | TASK [ovn-db : Flush handlers] ************************************************* 2025-05-25 03:54:21.586413 | orchestrator | Sunday 25 May 2025 03:53:25 +0000 (0:00:02.424) 0:01:35.981 ************ 2025-05-25 03:54:21.586423 | orchestrator | 2025-05-25 03:54:21.586432 | orchestrator | TASK [ovn-db : Flush handlers] ************************************************* 2025-05-25 03:54:21.586442 | orchestrator | Sunday 25 May 2025 03:53:25 +0000 (0:00:00.071) 0:01:36.052 ************ 2025-05-25 03:54:21.586451 | orchestrator | 2025-05-25 03:54:21.586461 | orchestrator | TASK [ovn-db : Flush handlers] ************************************************* 2025-05-25 03:54:21.586470 | orchestrator | Sunday 25 May 2025 03:53:25 +0000 (0:00:00.066) 0:01:36.119 ************ 2025-05-25 03:54:21.586486 | orchestrator | 2025-05-25 03:54:21.586495 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-nb-db container] ************************* 2025-05-25 03:54:21.586505 | orchestrator | Sunday 25 May 2025 03:53:25 +0000 (0:00:00.066) 0:01:36.185 ************ 2025-05-25 03:54:21.586514 | orchestrator | changed: [testbed-node-2] 2025-05-25 03:54:21.586524 | orchestrator | changed: [testbed-node-1] 2025-05-25 03:54:21.586534 | orchestrator | changed: [testbed-node-0] 2025-05-25 03:54:21.586543 | orchestrator | 2025-05-25 03:54:21.586553 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-sb-db container] ************************* 2025-05-25 03:54:21.586563 | orchestrator | Sunday 25 May 2025 03:53:31 +0000 (0:00:06.592) 0:01:42.777 ************ 2025-05-25 03:54:21.586572 | orchestrator | changed: [testbed-node-0] 2025-05-25 03:54:21.586586 | orchestrator | changed: [testbed-node-2] 2025-05-25 03:54:21.586596 | orchestrator | changed: [testbed-node-1] 2025-05-25 03:54:21.586605 | orchestrator | 2025-05-25 03:54:21.586615 | orchestrator | RUNNING HANDLER [ovn-db : Restart 
ovn-northd container] ************************ 2025-05-25 03:54:21.586625 | orchestrator | Sunday 25 May 2025 03:53:39 +0000 (0:00:07.580) 0:01:50.358 ************ 2025-05-25 03:54:21.586634 | orchestrator | changed: [testbed-node-0] 2025-05-25 03:54:21.586644 | orchestrator | changed: [testbed-node-2] 2025-05-25 03:54:21.586653 | orchestrator | changed: [testbed-node-1] 2025-05-25 03:54:21.586663 | orchestrator | 2025-05-25 03:54:21.586672 | orchestrator | TASK [ovn-db : Wait for leader election] *************************************** 2025-05-25 03:54:21.586682 | orchestrator | Sunday 25 May 2025 03:53:42 +0000 (0:00:02.611) 0:01:52.970 ************ 2025-05-25 03:54:21.586691 | orchestrator | skipping: [testbed-node-0] 2025-05-25 03:54:21.586701 | orchestrator | 2025-05-25 03:54:21.586710 | orchestrator | TASK [ovn-db : Get OVN_Northbound cluster leader] ****************************** 2025-05-25 03:54:21.586720 | orchestrator | Sunday 25 May 2025 03:53:42 +0000 (0:00:00.117) 0:01:53.088 ************ 2025-05-25 03:54:21.586729 | orchestrator | ok: [testbed-node-1] 2025-05-25 03:54:21.586739 | orchestrator | ok: [testbed-node-0] 2025-05-25 03:54:21.586748 | orchestrator | ok: [testbed-node-2] 2025-05-25 03:54:21.586758 | orchestrator | 2025-05-25 03:54:21.586767 | orchestrator | TASK [ovn-db : Configure OVN NB connection settings] *************************** 2025-05-25 03:54:21.586777 | orchestrator | Sunday 25 May 2025 03:53:43 +0000 (0:00:00.770) 0:01:53.858 ************ 2025-05-25 03:54:21.586786 | orchestrator | skipping: [testbed-node-1] 2025-05-25 03:54:21.586796 | orchestrator | skipping: [testbed-node-2] 2025-05-25 03:54:21.586805 | orchestrator | changed: [testbed-node-0] 2025-05-25 03:54:21.586815 | orchestrator | 2025-05-25 03:54:21.586824 | orchestrator | TASK [ovn-db : Get OVN_Southbound cluster leader] ****************************** 2025-05-25 03:54:21.586834 | orchestrator | Sunday 25 May 2025 03:53:43 +0000 (0:00:00.869) 0:01:54.728 ************ 
2025-05-25 03:54:21.586843 | orchestrator | ok: [testbed-node-0] 2025-05-25 03:54:21.586853 | orchestrator | ok: [testbed-node-1] 2025-05-25 03:54:21.586862 | orchestrator | ok: [testbed-node-2] 2025-05-25 03:54:21.586872 | orchestrator | 2025-05-25 03:54:21.586881 | orchestrator | TASK [ovn-db : Configure OVN SB connection settings] *************************** 2025-05-25 03:54:21.586891 | orchestrator | Sunday 25 May 2025 03:53:44 +0000 (0:00:00.731) 0:01:55.460 ************ 2025-05-25 03:54:21.586900 | orchestrator | skipping: [testbed-node-1] 2025-05-25 03:54:21.586910 | orchestrator | skipping: [testbed-node-2] 2025-05-25 03:54:21.586920 | orchestrator | changed: [testbed-node-0] 2025-05-25 03:54:21.586929 | orchestrator | 2025-05-25 03:54:21.586939 | orchestrator | TASK [ovn-db : Wait for ovn-nb-db] ********************************************* 2025-05-25 03:54:21.586948 | orchestrator | Sunday 25 May 2025 03:53:45 +0000 (0:00:00.648) 0:01:56.108 ************ 2025-05-25 03:54:21.586958 | orchestrator | ok: [testbed-node-0] 2025-05-25 03:54:21.586968 | orchestrator | ok: [testbed-node-1] 2025-05-25 03:54:21.586989 | orchestrator | ok: [testbed-node-2] 2025-05-25 03:54:21.587008 | orchestrator | 2025-05-25 03:54:21.587025 | orchestrator | TASK [ovn-db : Wait for ovn-sb-db] ********************************************* 2025-05-25 03:54:21.587041 | orchestrator | Sunday 25 May 2025 03:53:45 +0000 (0:00:00.705) 0:01:56.813 ************ 2025-05-25 03:54:21.587069 | orchestrator | ok: [testbed-node-0] 2025-05-25 03:54:21.587106 | orchestrator | ok: [testbed-node-1] 2025-05-25 03:54:21.587121 | orchestrator | ok: [testbed-node-2] 2025-05-25 03:54:21.587131 | orchestrator | 2025-05-25 03:54:21.587141 | orchestrator | TASK [ovn-db : Unset bootstrap args fact] ************************************** 2025-05-25 03:54:21.587150 | orchestrator | Sunday 25 May 2025 03:53:47 +0000 (0:00:01.439) 0:01:58.253 ************ 2025-05-25 03:54:21.587159 | orchestrator | ok: 
[testbed-node-0] 2025-05-25 03:54:21.587169 | orchestrator | ok: [testbed-node-1] 2025-05-25 03:54:21.587178 | orchestrator | ok: [testbed-node-2] 2025-05-25 03:54:21.587188 | orchestrator | 2025-05-25 03:54:21.587197 | orchestrator | TASK [ovn-db : Ensuring config directories exist] ****************************** 2025-05-25 03:54:21.587206 | orchestrator | Sunday 25 May 2025 03:53:47 +0000 (0:00:00.315) 0:01:58.569 ************ 2025-05-25 03:54:21.587216 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-25 03:54:21.587226 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-25 03:54:21.587236 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-25 03:54:21.587246 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-25 03:54:21.587262 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-25 03:54:21.587272 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-25 03:54:21.587282 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-25 03:54:21.587292 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-25 03:54:21.587315 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 
'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-25 03:54:21.587325 | orchestrator | 2025-05-25 03:54:21.587335 | orchestrator | TASK [ovn-db : Copying over config.json files for services] ******************** 2025-05-25 03:54:21.587344 | orchestrator | Sunday 25 May 2025 03:53:49 +0000 (0:00:01.467) 0:02:00.036 ************ 2025-05-25 03:54:21.587354 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-25 03:54:21.587364 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-25 03:54:21.587374 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-25 03:54:21.587384 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': 
['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-25 03:54:21.587394 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-25 03:54:21.587408 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-25 03:54:21.587418 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-25 03:54:21.587428 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-25 03:54:21.587447 | orchestrator | changed: 
[testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-25 03:54:21.587457 | orchestrator | 2025-05-25 03:54:21.587467 | orchestrator | TASK [ovn-db : Check ovn containers] ******************************************* 2025-05-25 03:54:21.587476 | orchestrator | Sunday 25 May 2025 03:53:53 +0000 (0:00:04.374) 0:02:04.411 ************ 2025-05-25 03:54:21.587491 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-25 03:54:21.587501 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-25 03:54:21.587511 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-25 03:54:21.587521 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 
'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-25 03:54:21.587531 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-25 03:54:21.587540 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-25 03:54:21.587555 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-25 03:54:21.587565 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-25 03:54:21.587580 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-25 03:54:21.587590 | orchestrator | 2025-05-25 03:54:21.587600 | orchestrator | TASK [ovn-db : Flush handlers] ************************************************* 2025-05-25 03:54:21.587609 | orchestrator | Sunday 25 May 2025 03:53:56 +0000 (0:00:02.926) 0:02:07.337 ************ 2025-05-25 03:54:21.587619 | orchestrator | 2025-05-25 03:54:21.587628 | orchestrator | TASK [ovn-db : Flush handlers] ************************************************* 2025-05-25 03:54:21.587638 | orchestrator | Sunday 25 May 2025 03:53:56 +0000 (0:00:00.063) 0:02:07.401 ************ 2025-05-25 03:54:21.587647 | orchestrator | 2025-05-25 03:54:21.587657 | orchestrator | TASK [ovn-db : Flush handlers] ************************************************* 2025-05-25 03:54:21.587666 | orchestrator | Sunday 25 May 2025 03:53:56 +0000 (0:00:00.065) 0:02:07.466 ************ 2025-05-25 03:54:21.587675 | orchestrator | 2025-05-25 03:54:21.587685 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-nb-db container] ************************* 2025-05-25 03:54:21.587694 | orchestrator | Sunday 25 May 2025 03:53:56 +0000 (0:00:00.062) 0:02:07.529 ************ 2025-05-25 03:54:21.587704 | orchestrator | changed: [testbed-node-2] 2025-05-25 03:54:21.587713 | orchestrator | changed: [testbed-node-1] 2025-05-25 03:54:21.587723 | orchestrator | 2025-05-25 03:54:21.587737 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-sb-db container] ************************* 2025-05-25 03:54:21.587747 | orchestrator | Sunday 25 May 
2025 03:54:02 +0000 (0:00:06.096) 0:02:13.625 ************ 2025-05-25 03:54:21.587756 | orchestrator | changed: [testbed-node-1] 2025-05-25 03:54:21.587766 | orchestrator | changed: [testbed-node-2] 2025-05-25 03:54:21.587775 | orchestrator | 2025-05-25 03:54:21.587784 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-northd container] ************************ 2025-05-25 03:54:21.587794 | orchestrator | Sunday 25 May 2025 03:54:08 +0000 (0:00:06.042) 0:02:19.668 ************ 2025-05-25 03:54:21.587803 | orchestrator | changed: [testbed-node-1] 2025-05-25 03:54:21.587812 | orchestrator | changed: [testbed-node-2] 2025-05-25 03:54:21.587822 | orchestrator | 2025-05-25 03:54:21.587831 | orchestrator | TASK [ovn-db : Wait for leader election] *************************************** 2025-05-25 03:54:21.587841 | orchestrator | Sunday 25 May 2025 03:54:14 +0000 (0:00:06.078) 0:02:25.747 ************ 2025-05-25 03:54:21.587850 | orchestrator | skipping: [testbed-node-0] 2025-05-25 03:54:21.587859 | orchestrator | 2025-05-25 03:54:21.587869 | orchestrator | TASK [ovn-db : Get OVN_Northbound cluster leader] ****************************** 2025-05-25 03:54:21.587878 | orchestrator | Sunday 25 May 2025 03:54:15 +0000 (0:00:00.133) 0:02:25.880 ************ 2025-05-25 03:54:21.587887 | orchestrator | ok: [testbed-node-0] 2025-05-25 03:54:21.587897 | orchestrator | ok: [testbed-node-1] 2025-05-25 03:54:21.587906 | orchestrator | ok: [testbed-node-2] 2025-05-25 03:54:21.587916 | orchestrator | 2025-05-25 03:54:21.587925 | orchestrator | TASK [ovn-db : Configure OVN NB connection settings] *************************** 2025-05-25 03:54:21.587935 | orchestrator | Sunday 25 May 2025 03:54:16 +0000 (0:00:00.957) 0:02:26.838 ************ 2025-05-25 03:54:21.587944 | orchestrator | skipping: [testbed-node-1] 2025-05-25 03:54:21.587953 | orchestrator | skipping: [testbed-node-2] 2025-05-25 03:54:21.587963 | orchestrator | changed: [testbed-node-0] 2025-05-25 03:54:21.587972 | 
orchestrator | 2025-05-25 03:54:21.587982 | orchestrator | TASK [ovn-db : Get OVN_Southbound cluster leader] ****************************** 2025-05-25 03:54:21.587991 | orchestrator | Sunday 25 May 2025 03:54:16 +0000 (0:00:00.593) 0:02:27.432 ************ 2025-05-25 03:54:21.588001 | orchestrator | ok: [testbed-node-0] 2025-05-25 03:54:21.588010 | orchestrator | ok: [testbed-node-1] 2025-05-25 03:54:21.588019 | orchestrator | ok: [testbed-node-2] 2025-05-25 03:54:21.588029 | orchestrator | 2025-05-25 03:54:21.588041 | orchestrator | TASK [ovn-db : Configure OVN SB connection settings] *************************** 2025-05-25 03:54:21.588065 | orchestrator | Sunday 25 May 2025 03:54:17 +0000 (0:00:00.817) 0:02:28.249 ************ 2025-05-25 03:54:21.588152 | orchestrator | skipping: [testbed-node-1] 2025-05-25 03:54:21.588164 | orchestrator | skipping: [testbed-node-2] 2025-05-25 03:54:21.588174 | orchestrator | changed: [testbed-node-0] 2025-05-25 03:54:21.588183 | orchestrator | 2025-05-25 03:54:21.588193 | orchestrator | TASK [ovn-db : Wait for ovn-nb-db] ********************************************* 2025-05-25 03:54:21.588202 | orchestrator | Sunday 25 May 2025 03:54:17 +0000 (0:00:00.526) 0:02:28.775 ************ 2025-05-25 03:54:21.588212 | orchestrator | ok: [testbed-node-0] 2025-05-25 03:54:21.588221 | orchestrator | ok: [testbed-node-1] 2025-05-25 03:54:21.588231 | orchestrator | ok: [testbed-node-2] 2025-05-25 03:54:21.588240 | orchestrator | 2025-05-25 03:54:21.588249 | orchestrator | TASK [ovn-db : Wait for ovn-sb-db] ********************************************* 2025-05-25 03:54:21.588259 | orchestrator | Sunday 25 May 2025 03:54:18 +0000 (0:00:01.029) 0:02:29.804 ************ 2025-05-25 03:54:21.588268 | orchestrator | ok: [testbed-node-0] 2025-05-25 03:54:21.588277 | orchestrator | ok: [testbed-node-1] 2025-05-25 03:54:21.588287 | orchestrator | ok: [testbed-node-2] 2025-05-25 03:54:21.588296 | orchestrator | 2025-05-25 03:54:21.588306 | orchestrator | 
PLAY RECAP ********************************************************************* 2025-05-25 03:54:21.588316 | orchestrator | testbed-node-0 : ok=44  changed=18  unreachable=0 failed=0 skipped=20  rescued=0 ignored=0 2025-05-25 03:54:21.588326 | orchestrator | testbed-node-1 : ok=43  changed=19  unreachable=0 failed=0 skipped=22  rescued=0 ignored=0 2025-05-25 03:54:21.588408 | orchestrator | testbed-node-2 : ok=43  changed=19  unreachable=0 failed=0 skipped=22  rescued=0 ignored=0 2025-05-25 03:54:21.588432 | orchestrator | testbed-node-3 : ok=12  changed=8  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-05-25 03:54:21.588442 | orchestrator | testbed-node-4 : ok=12  changed=8  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-05-25 03:54:21.588452 | orchestrator | testbed-node-5 : ok=12  changed=8  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-05-25 03:54:21.588462 | orchestrator | 2025-05-25 03:54:21.588471 | orchestrator | 2025-05-25 03:54:21.588481 | orchestrator | TASKS RECAP ******************************************************************** 2025-05-25 03:54:21.588491 | orchestrator | Sunday 25 May 2025 03:54:19 +0000 (0:00:01.017) 0:02:30.822 ************ 2025-05-25 03:54:21.588500 | orchestrator | =============================================================================== 2025-05-25 03:54:21.588509 | orchestrator | ovn-controller : Restart ovn-controller container ---------------------- 41.34s 2025-05-25 03:54:21.588519 | orchestrator | ovn-controller : Configure OVN in OVSDB -------------------------------- 18.21s 2025-05-25 03:54:21.588528 | orchestrator | ovn-db : Restart ovn-sb-db container ----------------------------------- 13.62s 2025-05-25 03:54:21.588538 | orchestrator | ovn-db : Restart ovn-nb-db container ----------------------------------- 12.69s 2025-05-25 03:54:21.588547 | orchestrator | ovn-db : Restart ovn-northd container ----------------------------------- 8.69s 2025-05-25 03:54:21.588556 | orchestrator | 
ovn-db : Copying over config.json files for services -------------------- 4.37s 2025-05-25 03:54:21.588566 | orchestrator | ovn-db : Copying over config.json files for services -------------------- 3.67s 2025-05-25 03:54:21.588584 | orchestrator | ovn-db : Check ovn containers ------------------------------------------- 2.93s 2025-05-25 03:54:21.588594 | orchestrator | ovn-db : Check ovn containers ------------------------------------------- 2.42s 2025-05-25 03:54:21.588604 | orchestrator | ovn-controller : Create br-int bridge on OpenvSwitch -------------------- 2.31s 2025-05-25 03:54:21.588613 | orchestrator | ovn-controller : Copying over config.json files for services ------------ 1.77s 2025-05-25 03:54:21.588622 | orchestrator | ovn-db : Ensuring config directories exist ------------------------------ 1.76s 2025-05-25 03:54:21.588639 | orchestrator | ovn-controller : Reload systemd config ---------------------------------- 1.70s 2025-05-25 03:54:21.588649 | orchestrator | ovn-controller : Ensuring config directories exist ---------------------- 1.65s 2025-05-25 03:54:21.588658 | orchestrator | ovn-db : Ensuring config directories exist ------------------------------ 1.47s 2025-05-25 03:54:21.588668 | orchestrator | ovn-db : Wait for ovn-sb-db --------------------------------------------- 1.44s 2025-05-25 03:54:21.588677 | orchestrator | ovn-controller : Copying over systemd override -------------------------- 1.36s 2025-05-25 03:54:21.588687 | orchestrator | ovn-controller : Check ovn-controller containers ------------------------ 1.33s 2025-05-25 03:54:21.588696 | orchestrator | ovn-controller : Ensuring systemd override directory exists ------------- 1.20s 2025-05-25 03:54:21.588705 | orchestrator | ovn-controller : include_tasks ------------------------------------------ 1.16s 2025-05-25 03:54:21.588715 | orchestrator | 2025-05-25 03:54:21 | INFO  | Task c0214740-d9b7-4bee-98a0-0214c94bbfee is in state STARTED 2025-05-25 03:54:21.588725 | orchestrator | 
2025-05-25 03:54:21 | INFO  | Task a839d0dc-b1b8-464e-b4ab-1e7908546cf0 is in state STARTED 2025-05-25 03:54:21.588734 | orchestrator | 2025-05-25 03:54:21 | INFO  | Wait 1 second(s) until the next check 2025-05-25 03:54:24.624574 | orchestrator | 2025-05-25 03:54:24 | INFO  | Task c0214740-d9b7-4bee-98a0-0214c94bbfee is in state STARTED 2025-05-25 03:54:24.626271 | orchestrator | 2025-05-25 03:54:24 | INFO  | Task a839d0dc-b1b8-464e-b4ab-1e7908546cf0 is in state STARTED 2025-05-25 03:54:24.626306 | orchestrator | 2025-05-25 03:54:24 | INFO  | Wait 1 second(s) until the next check 2025-05-25 03:54:27.679360 | orchestrator | 2025-05-25 03:54:27 | INFO  | Task c0214740-d9b7-4bee-98a0-0214c94bbfee is in state STARTED 2025-05-25 03:54:27.680546 | orchestrator | 2025-05-25 03:54:27 | INFO  | Task a839d0dc-b1b8-464e-b4ab-1e7908546cf0 is in state STARTED 2025-05-25 03:54:27.680589 | orchestrator | 2025-05-25 03:54:27 | INFO  | Wait 1 second(s) until the next check 2025-05-25 03:54:30.747758 | orchestrator | 2025-05-25 03:54:30 | INFO  | Task c0214740-d9b7-4bee-98a0-0214c94bbfee is in state STARTED 2025-05-25 03:54:30.749362 | orchestrator | 2025-05-25 03:54:30 | INFO  | Task a839d0dc-b1b8-464e-b4ab-1e7908546cf0 is in state STARTED 2025-05-25 03:54:30.749401 | orchestrator | 2025-05-25 03:54:30 | INFO  | Wait 1 second(s) until the next check 2025-05-25 03:54:33.799731 | orchestrator | 2025-05-25 03:54:33 | INFO  | Task c0214740-d9b7-4bee-98a0-0214c94bbfee is in state STARTED 2025-05-25 03:54:33.802266 | orchestrator | 2025-05-25 03:54:33 | INFO  | Task a839d0dc-b1b8-464e-b4ab-1e7908546cf0 is in state STARTED 2025-05-25 03:54:33.802299 | orchestrator | 2025-05-25 03:54:33 | INFO  | Wait 1 second(s) until the next check 2025-05-25 03:54:36.859563 | orchestrator | 2025-05-25 03:54:36 | INFO  | Task c0214740-d9b7-4bee-98a0-0214c94bbfee is in state STARTED 2025-05-25 03:54:36.859649 | orchestrator | 2025-05-25 03:54:36 | INFO  | Task a839d0dc-b1b8-464e-b4ab-1e7908546cf0 is in 
state STARTED
2025-05-25 03:54:36.859665 | orchestrator | 2025-05-25 03:54:36 | INFO  | Wait 1 second(s) until the next check
2025-05-25 03:54:39.898118 | orchestrator | 2025-05-25 03:54:39 | INFO  | Task c0214740-d9b7-4bee-98a0-0214c94bbfee is in state STARTED
2025-05-25 03:54:39.898323 | orchestrator | 2025-05-25 03:54:39 | INFO  | Task a839d0dc-b1b8-464e-b4ab-1e7908546cf0 is in state STARTED
2025-05-25 03:54:39.898370 | orchestrator | 2025-05-25 03:54:39 | INFO  | Wait 1 second(s) until the next check
[… the same check/wait cycle repeats about every 3 seconds from 03:54:42 through 03:56:14; tasks c0214740-d9b7-4bee-98a0-0214c94bbfee and a839d0dc-b1b8-464e-b4ab-1e7908546cf0 remain in state STARTED throughout …]
2025-05-25 03:56:17.574354 | orchestrator | 2025-05-25 03:56:17 | INFO  | Task a34f69b1-2551-4b02-ad71-2b11ad4129ac is in state STARTED
[… the cycle continues with all three tasks in state STARTED through 03:56:32 …]
2025-05-25 03:56:35.882594 | orchestrator | 2025-05-25 03:56:35 | INFO  | Task a34f69b1-2551-4b02-ad71-2b11ad4129ac is in state SUCCESS
[… the cycle continues with the two remaining tasks in state STARTED through 03:56:48 …]
2025-05-25 03:56:51.132331 | orchestrator | 2025-05-25 03:56:51 | INFO  | Task c0214740-d9b7-4bee-98a0-0214c94bbfee is in state STARTED
2025-05-25 03:56:51.133624 | orchestrator | 2025-05-25 03:56:51 | INFO  | Task a839d0dc-b1b8-464e-b4ab-1e7908546cf0 is in state STARTED
2025-05-25 03:56:51.134986 | orchestrator | 2025-05-25 03:56:51 |
INFO  | Wait 1 second(s) until the next check
2025-05-25 03:56:54.177282 | orchestrator | 2025-05-25 03:56:54 | INFO  | Task ff96587e-20b9-4298-a3c1-3571b1e2dcb3 is in state STARTED
2025-05-25 03:56:54.178559 | orchestrator | 2025-05-25 03:56:54 | INFO  | Task c0214740-d9b7-4bee-98a0-0214c94bbfee is in state STARTED
2025-05-25 03:56:54.191784 | orchestrator | 2025-05-25 03:56:54 | INFO  | Task a839d0dc-b1b8-464e-b4ab-1e7908546cf0 is in state SUCCESS
2025-05-25 03:56:54.194887 | orchestrator |
2025-05-25 03:56:54.195334 | orchestrator | None
2025-05-25 03:56:54.195354 | orchestrator |
2025-05-25 03:56:54.195366 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2025-05-25 03:56:54.195378 | orchestrator |
2025-05-25 03:56:54.195390 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2025-05-25 03:56:54.195401 | orchestrator | Sunday 25 May 2025 03:50:38 +0000 (0:00:00.198) 0:00:00.198 ************
2025-05-25 03:56:54.195413 | orchestrator | ok: [testbed-node-0]
2025-05-25 03:56:54.195425 | orchestrator | ok: [testbed-node-1]
2025-05-25 03:56:54.195436 | orchestrator | ok: [testbed-node-2]
2025-05-25 03:56:54.195447 | orchestrator |
2025-05-25 03:56:54.195459 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2025-05-25 03:56:54.195470 | orchestrator | Sunday 25 May 2025 03:50:39 +0000 (0:00:00.244) 0:00:00.443 ************
2025-05-25 03:56:54.195481 | orchestrator | ok: [testbed-node-0] => (item=enable_loadbalancer_True)
2025-05-25 03:56:54.195492 | orchestrator | ok: [testbed-node-1] => (item=enable_loadbalancer_True)
2025-05-25 03:56:54.195529 | orchestrator | ok: [testbed-node-2] => (item=enable_loadbalancer_True)
2025-05-25 03:56:54.195541 | orchestrator |
2025-05-25 03:56:54.195552 | orchestrator | PLAY [Apply role loadbalancer] *************************************************
2025-05-25 03:56:54.195563 | orchestrator |
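The repeated "is in state STARTED … Wait 1 second(s) until the next check" records above come from the deploy wrapper polling the state of background tasks (by UUID) until they leave the running states. A minimal sketch of such a polling loop, assuming a caller-supplied `get_state` lookup (the function name and task IDs here are hypothetical, not OSISM's actual API):

```python
import time

def wait_for_tasks(task_ids, get_state, interval=1.0, timeout=3600):
    """Poll until every task leaves the PENDING/STARTED states.

    get_state(task_id) is a caller-supplied lookup; in a real deployment it
    would query the task backend (e.g. a Celery result backend) by UUID.
    Returns a dict mapping task_id -> final state (e.g. SUCCESS).
    """
    pending = set(task_ids)
    results = {}
    deadline = time.monotonic() + timeout
    while pending:
        if time.monotonic() > deadline:
            raise TimeoutError(f"tasks still running: {sorted(pending)}")
        for task_id in sorted(pending):
            state = get_state(task_id)
            print(f"Task {task_id} is in state {state}")
            if state not in ("PENDING", "STARTED"):
                results[task_id] = state  # finished: SUCCESS, FAILURE, ...
        pending -= results.keys()
        if pending:
            print(f"Wait {interval:g} second(s) until the next check")
            time.sleep(interval)
    return results
```

Each pass logs every still-running task once before sleeping, which is why the records above interleave one status line per task followed by a single wait line.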
2025-05-25 03:56:54.195574 | orchestrator | TASK [loadbalancer : include_tasks] ********************************************
2025-05-25 03:56:54.195585 | orchestrator | Sunday 25 May 2025 03:50:39 +0000 (0:00:00.341) 0:00:00.784 ************
2025-05-25 03:56:54.195596 | orchestrator | included: /ansible/roles/loadbalancer/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-05-25 03:56:54.195607 | orchestrator |
2025-05-25 03:56:54.195619 | orchestrator | TASK [loadbalancer : Check IPv6 support] ***************************************
2025-05-25 03:56:54.195630 | orchestrator | Sunday 25 May 2025 03:50:40 +0000 (0:00:00.815) 0:00:01.600 ************
2025-05-25 03:56:54.195641 | orchestrator | ok: [testbed-node-0]
2025-05-25 03:56:54.195652 | orchestrator | ok: [testbed-node-2]
2025-05-25 03:56:54.195663 | orchestrator | ok: [testbed-node-1]
2025-05-25 03:56:54.195674 | orchestrator |
2025-05-25 03:56:54.195685 | orchestrator | TASK [Setting sysctl values] ***************************************************
2025-05-25 03:56:54.195696 | orchestrator | Sunday 25 May 2025 03:50:41 +0000 (0:00:00.955) 0:00:02.555 ************
2025-05-25 03:56:54.195707 | orchestrator | included: sysctl for testbed-node-0, testbed-node-1, testbed-node-2
2025-05-25 03:56:54.195718 | orchestrator |
2025-05-25 03:56:54.195729 | orchestrator | TASK [sysctl : Check IPv6 support] *********************************************
2025-05-25 03:56:54.195740 | orchestrator | Sunday 25 May 2025 03:50:42 +0000 (0:00:01.365) 0:00:03.921 ************
2025-05-25 03:56:54.195751 | orchestrator | ok: [testbed-node-0]
2025-05-25 03:56:54.195762 | orchestrator | ok: [testbed-node-1]
2025-05-25 03:56:54.195772 | orchestrator | ok: [testbed-node-2]
2025-05-25 03:56:54.195783 | orchestrator |
2025-05-25 03:56:54.195794 | orchestrator | TASK [sysctl : Setting sysctl values] ******************************************
2025-05-25 03:56:54.195805 | orchestrator | Sunday 25 May 2025 03:50:43 +0000
(0:00:00.851) 0:00:04.773 ************
2025-05-25 03:56:54.195816 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv6.ip_nonlocal_bind', 'value': 1})
2025-05-25 03:56:54.195827 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv6.ip_nonlocal_bind', 'value': 1})
2025-05-25 03:56:54.195838 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv6.ip_nonlocal_bind', 'value': 1})
2025-05-25 03:56:54.195849 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.ip_nonlocal_bind', 'value': 1})
2025-05-25 03:56:54.195860 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.ip_nonlocal_bind', 'value': 1})
2025-05-25 03:56:54.195871 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.ip_nonlocal_bind', 'value': 1})
2025-05-25 03:56:54.195881 | orchestrator | ok: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_retries2', 'value': 'KOLLA_UNSET'})
2025-05-25 03:56:54.195894 | orchestrator | ok: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_retries2', 'value': 'KOLLA_UNSET'})
2025-05-25 03:56:54.195905 | orchestrator | ok: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_retries2', 'value': 'KOLLA_UNSET'})
2025-05-25 03:56:54.195918 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.unix.max_dgram_qlen', 'value': 128})
2025-05-25 03:56:54.195930 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.unix.max_dgram_qlen', 'value': 128})
2025-05-25 03:56:54.195943 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.unix.max_dgram_qlen', 'value': 128})
2025-05-25 03:56:54.195955 | orchestrator |
2025-05-25 03:56:54.195968 | orchestrator | TASK [module-load : Load modules] **********************************************
2025-05-25 03:56:54.195980 | orchestrator | Sunday 25 May 2025 03:50:46 +0000 (0:00:03.228) 0:00:08.001 ************
2025-05-25 03:56:54.195992 | orchestrator | changed: [testbed-node-1] => (item=ip_vs)
2025-05-25
03:56:54.196076 | orchestrator | changed: [testbed-node-0] => (item=ip_vs)
2025-05-25 03:56:54.196100 | orchestrator | changed: [testbed-node-2] => (item=ip_vs)
2025-05-25 03:56:54.196113 | orchestrator |
2025-05-25 03:56:54.196126 | orchestrator | TASK [module-load : Persist modules via modules-load.d] ************************
2025-05-25 03:56:54.196138 | orchestrator | Sunday 25 May 2025 03:50:47 +0000 (0:00:01.034) 0:00:09.035 ************
2025-05-25 03:56:54.196151 | orchestrator | changed: [testbed-node-1] => (item=ip_vs)
2025-05-25 03:56:54.196178 | orchestrator | changed: [testbed-node-0] => (item=ip_vs)
2025-05-25 03:56:54.196190 | orchestrator | changed: [testbed-node-2] => (item=ip_vs)
2025-05-25 03:56:54.196204 | orchestrator |
2025-05-25 03:56:54.196216 | orchestrator | TASK [module-load : Drop module persistence] ***********************************
2025-05-25 03:56:54.196229 | orchestrator | Sunday 25 May 2025 03:50:49 +0000 (0:00:01.529) 0:00:10.565 ************
2025-05-25 03:56:54.196242 | orchestrator | skipping: [testbed-node-0] => (item=ip_vs)
2025-05-25 03:56:54.196255 | orchestrator | skipping: [testbed-node-0]
2025-05-25 03:56:54.196282 | orchestrator | skipping: [testbed-node-1] => (item=ip_vs)
2025-05-25 03:56:54.196294 | orchestrator | skipping: [testbed-node-1]
2025-05-25 03:56:54.196305 | orchestrator | skipping: [testbed-node-2] => (item=ip_vs)
2025-05-25 03:56:54.196316 | orchestrator | skipping: [testbed-node-2]
2025-05-25 03:56:54.196327 | orchestrator |
2025-05-25 03:56:54.196339 | orchestrator | TASK [loadbalancer : Ensuring config directories exist] ************************
2025-05-25 03:56:54.196350 | orchestrator | Sunday 25 May 2025 03:50:49 +0000 (0:00:00.731) 0:00:11.296 ************
2025-05-25 03:56:54.196365 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged':
True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})
2025-05-25 03:56:54.196564 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})
2025-05-25 03:56:54.196583 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})
2025-05-25 03:56:54.196595 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2025-05-25 03:56:54.196614 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2025-05-25 03:56:54.196640 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2025-05-25 03:56:54.196652 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2025-05-25 03:56:54.198295 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2025-05-25 03:56:54.198376 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2025-05-25 03:56:54.198394 | orchestrator |
2025-05-25 03:56:54.198418 | orchestrator | TASK [loadbalancer : Ensuring haproxy service config subdir exists] ************
2025-05-25 03:56:54.198437 | orchestrator | Sunday 25 May 2025 03:50:52 +0000 (0:00:02.233) 0:00:13.529 ************
2025-05-25 03:56:54.198468 | orchestrator | changed: [testbed-node-0]
2025-05-25 03:56:54.198489 | orchestrator | changed: [testbed-node-2]
2025-05-25 03:56:54.198507 | orchestrator | changed: [testbed-node-1]
2025-05-25 03:56:54.198524 | orchestrator |
2025-05-25 03:56:54.198544 | orchestrator | TASK [loadbalancer : Ensuring
proxysql service config subdirectories exist] ****
2025-05-25 03:56:54.198563 | orchestrator | Sunday 25 May 2025 03:50:53 +0000 (0:00:01.453) 0:00:14.983 ************
2025-05-25 03:56:54.198583 | orchestrator | changed: [testbed-node-0] => (item=users)
2025-05-25 03:56:54.198603 | orchestrator | changed: [testbed-node-1] => (item=users)
2025-05-25 03:56:54.198622 | orchestrator | changed: [testbed-node-2] => (item=users)
2025-05-25 03:56:54.198642 | orchestrator | changed: [testbed-node-0] => (item=rules)
2025-05-25 03:56:54.198653 | orchestrator | changed: [testbed-node-1] => (item=rules)
2025-05-25 03:56:54.198694 | orchestrator | changed: [testbed-node-2] => (item=rules)
2025-05-25 03:56:54.198706 | orchestrator |
2025-05-25 03:56:54.198717 | orchestrator | TASK [loadbalancer : Ensuring keepalived checks subdir exists] *****************
2025-05-25 03:56:54.198728 | orchestrator | Sunday 25 May 2025 03:50:55 +0000 (0:00:02.137) 0:00:17.120 ************
2025-05-25 03:56:54.198739 | orchestrator | changed: [testbed-node-0]
2025-05-25 03:56:54.198749 | orchestrator | changed: [testbed-node-1]
2025-05-25 03:56:54.198760 | orchestrator | changed: [testbed-node-2]
2025-05-25 03:56:54.198771 | orchestrator |
2025-05-25 03:56:54.198782 | orchestrator | TASK [loadbalancer : Remove mariadb.cfg if proxysql enabled] *******************
2025-05-25 03:56:54.198793 | orchestrator | Sunday 25 May 2025 03:50:57 +0000 (0:00:01.692) 0:00:18.812 ************
2025-05-25 03:56:54.198804 | orchestrator | ok: [testbed-node-0]
2025-05-25 03:56:54.198815 | orchestrator | ok: [testbed-node-1]
2025-05-25 03:56:54.198825 | orchestrator | ok: [testbed-node-2]
2025-05-25 03:56:54.198836 | orchestrator |
2025-05-25 03:56:54.198847 | orchestrator | TASK [loadbalancer : Removing checks for services which are disabled] **********
2025-05-25 03:56:54.198857 | orchestrator | Sunday 25 May 2025 03:50:58 +0000 (0:00:01.283) 0:00:20.096 ************
2025-05-25 03:56:54.198870 | orchestrator |
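The per-service definitions logged in this play (haproxy, proxysql, keepalived, haproxy-ssh) share a kolla-style shape with an `enabled` flag; tasks such as "Removing checks for services which are disabled" act only on the disabled entries, which is why every enabled service is reported as skipped. A minimal sketch of that split, using an abbreviated, hypothetical services map (the real definitions also carry image, volumes, dimensions, and healthcheck settings):

```python
# Abbreviated, hypothetical kolla-style service map for illustration only.
services = {
    "haproxy":     {"container_name": "haproxy",     "enabled": True},
    "proxysql":    {"container_name": "proxysql",    "enabled": True},
    "keepalived":  {"container_name": "keepalived",  "enabled": True},
    "haproxy-ssh": {"container_name": "haproxy_ssh", "enabled": False},
}

def split_by_enabled(services):
    """Return (enabled, disabled) service names, preserving insertion order."""
    enabled = [name for name, svc in services.items() if svc.get("enabled")]
    disabled = [name for name, svc in services.items() if not svc.get("enabled")]
    return enabled, disabled

enabled, disabled = split_by_enabled(services)
# Only the disabled services would have their check files removed;
# the enabled ones are reported as "skipping" by the cleanup task.
```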
skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})
2025-05-25 03:56:54.198918 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2025-05-25 03:56:54.198931 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2025-05-25 03:56:54.198943 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/haproxy-ssh:2024.2', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__e2087bd7b5a5539772493fa29bb4aff9085af7db', '__omit_place_holder__e2087bd7b5a5539772493fa29bb4aff9085af7db'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})
2025-05-25 03:56:54.198955 | orchestrator | skipping: [testbed-node-0]
2025-05-25 03:56:54.198967 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})
2025-05-25 03:56:54.198987 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2025-05-25 03:56:54.198998 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2025-05-25 03:56:54.199046 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/haproxy-ssh:2024.2', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__e2087bd7b5a5539772493fa29bb4aff9085af7db', '__omit_place_holder__e2087bd7b5a5539772493fa29bb4aff9085af7db'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})
2025-05-25 03:56:54.199058 | orchestrator | skipping: [testbed-node-1]
2025-05-25 03:56:54.199079 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})
2025-05-25 03:56:54.199091 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-05-25 03:56:54.199102 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-05-25 03:56:54.199120 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/haproxy-ssh:2024.2', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__e2087bd7b5a5539772493fa29bb4aff9085af7db', '__omit_place_holder__e2087bd7b5a5539772493fa29bb4aff9085af7db'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2025-05-25 03:56:54.199132 | orchestrator | skipping: 
[testbed-node-2] 2025-05-25 03:56:54.199143 | orchestrator | 2025-05-25 03:56:54.199154 | orchestrator | TASK [loadbalancer : Copying checks for services which are enabled] ************ 2025-05-25 03:56:54.199165 | orchestrator | Sunday 25 May 2025 03:50:59 +0000 (0:00:00.512) 0:00:20.608 ************ 2025-05-25 03:56:54.199176 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}}) 2025-05-25 03:56:54.199192 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}}) 2025-05-25 03:56:54.199227 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}}) 2025-05-25 03:56:54.199239 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-05-25 03:56:54.199251 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-05-25 03:56:54.199272 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/haproxy-ssh:2024.2', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 
'__omit_place_holder__e2087bd7b5a5539772493fa29bb4aff9085af7db', '__omit_place_holder__e2087bd7b5a5539772493fa29bb4aff9085af7db'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2025-05-25 03:56:54.199284 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-05-25 03:56:54.199295 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-05-25 03:56:54.199312 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/haproxy-ssh:2024.2', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__e2087bd7b5a5539772493fa29bb4aff9085af7db', 
'__omit_place_holder__e2087bd7b5a5539772493fa29bb4aff9085af7db'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2025-05-25 03:56:54.199333 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-05-25 03:56:54.199345 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-05-25 03:56:54.199363 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/haproxy-ssh:2024.2', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__e2087bd7b5a5539772493fa29bb4aff9085af7db', 
'__omit_place_holder__e2087bd7b5a5539772493fa29bb4aff9085af7db'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2025-05-25 03:56:54.199375 | orchestrator | 2025-05-25 03:56:54.199385 | orchestrator | TASK [loadbalancer : Copying over config.json files for services] ************** 2025-05-25 03:56:54.199396 | orchestrator | Sunday 25 May 2025 03:51:03 +0000 (0:00:04.347) 0:00:24.956 ************ 2025-05-25 03:56:54.199408 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}}) 2025-05-25 03:56:54.199419 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}}) 2025-05-25 03:56:54.199435 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 
'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}}) 2025-05-25 03:56:54.199453 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-05-25 03:56:54.199465 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-05-25 03:56:54.199483 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 
'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-05-25 03:56:54.199494 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2025-05-25 03:56:54.199506 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2025-05-25 03:56:54.199517 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2025-05-25 03:56:54.199528 | orchestrator | 2025-05-25 03:56:54.199539 | orchestrator | TASK [loadbalancer : Copying over haproxy.cfg] ********************************* 2025-05-25 03:56:54.199550 | orchestrator | Sunday 25 May 2025 03:51:06 +0000 (0:00:03.355) 0:00:28.311 ************ 2025-05-25 03:56:54.199561 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_main.cfg.j2) 2025-05-25 03:56:54.199573 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_main.cfg.j2) 2025-05-25 03:56:54.199584 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_main.cfg.j2) 2025-05-25 03:56:54.199594 | orchestrator | 2025-05-25 03:56:54.199605 | orchestrator | TASK [loadbalancer : Copying over proxysql config] ***************************** 2025-05-25 03:56:54.199621 | orchestrator | Sunday 25 May 2025 03:51:08 +0000 (0:00:01.684) 0:00:29.995 ************ 2025-05-25 03:56:54.199632 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql.yaml.j2) 2025-05-25 03:56:54.199643 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql.yaml.j2) 2025-05-25 03:56:54.199659 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql.yaml.j2) 2025-05-25 03:56:54.199670 | orchestrator | 2025-05-25 03:56:54.199681 | orchestrator | TASK [loadbalancer : Copying over haproxy single external frontend config] ***** 2025-05-25 03:56:54.199691 | orchestrator | Sunday 25 May 2025 03:51:12 +0000 (0:00:03.733) 0:00:33.729 ************ 2025-05-25 03:56:54.199709 | orchestrator | skipping: [testbed-node-0] 2025-05-25 03:56:54.199719 
| orchestrator | skipping: [testbed-node-1] 2025-05-25 03:56:54.199730 | orchestrator | skipping: [testbed-node-2] 2025-05-25 03:56:54.199741 | orchestrator | 2025-05-25 03:56:54.199751 | orchestrator | TASK [loadbalancer : Copying over custom haproxy services configuration] ******* 2025-05-25 03:56:54.199762 | orchestrator | Sunday 25 May 2025 03:51:13 +0000 (0:00:00.761) 0:00:34.490 ************ 2025-05-25 03:56:54.199773 | orchestrator | changed: [testbed-node-1] => (item=/opt/configuration/environments/kolla/files/overlays/haproxy/services.d/haproxy.cfg) 2025-05-25 03:56:54.199784 | orchestrator | changed: [testbed-node-0] => (item=/opt/configuration/environments/kolla/files/overlays/haproxy/services.d/haproxy.cfg) 2025-05-25 03:56:54.199795 | orchestrator | changed: [testbed-node-2] => (item=/opt/configuration/environments/kolla/files/overlays/haproxy/services.d/haproxy.cfg) 2025-05-25 03:56:54.199805 | orchestrator | 2025-05-25 03:56:54.199816 | orchestrator | TASK [loadbalancer : Copying over keepalived.conf] ***************************** 2025-05-25 03:56:54.199827 | orchestrator | Sunday 25 May 2025 03:51:17 +0000 (0:00:04.138) 0:00:38.629 ************ 2025-05-25 03:56:54.199837 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/keepalived/keepalived.conf.j2) 2025-05-25 03:56:54.199848 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/keepalived/keepalived.conf.j2) 2025-05-25 03:56:54.199858 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/keepalived/keepalived.conf.j2) 2025-05-25 03:56:54.199869 | orchestrator | 2025-05-25 03:56:54.199880 | orchestrator | TASK [loadbalancer : Copying over haproxy.pem] ********************************* 2025-05-25 03:56:54.199890 | orchestrator | Sunday 25 May 2025 03:51:18 +0000 (0:00:01.758) 0:00:40.388 ************ 2025-05-25 03:56:54.199901 | orchestrator | changed: [testbed-node-0] => 
(item=haproxy.pem) 2025-05-25 03:56:54.199912 | orchestrator | changed: [testbed-node-1] => (item=haproxy.pem) 2025-05-25 03:56:54.199922 | orchestrator | changed: [testbed-node-2] => (item=haproxy.pem) 2025-05-25 03:56:54.199933 | orchestrator | 2025-05-25 03:56:54.199944 | orchestrator | TASK [loadbalancer : Copying over haproxy-internal.pem] ************************ 2025-05-25 03:56:54.199954 | orchestrator | Sunday 25 May 2025 03:51:20 +0000 (0:00:01.445) 0:00:41.833 ************ 2025-05-25 03:56:54.199965 | orchestrator | changed: [testbed-node-0] => (item=haproxy-internal.pem) 2025-05-25 03:56:54.199976 | orchestrator | changed: [testbed-node-1] => (item=haproxy-internal.pem) 2025-05-25 03:56:54.199986 | orchestrator | changed: [testbed-node-2] => (item=haproxy-internal.pem) 2025-05-25 03:56:54.199997 | orchestrator | 2025-05-25 03:56:54.200045 | orchestrator | TASK [loadbalancer : include_tasks] ******************************************** 2025-05-25 03:56:54.200057 | orchestrator | Sunday 25 May 2025 03:51:21 +0000 (0:00:01.525) 0:00:43.359 ************ 2025-05-25 03:56:54.200067 | orchestrator | included: /ansible/roles/loadbalancer/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-25 03:56:54.200078 | orchestrator | 2025-05-25 03:56:54.200088 | orchestrator | TASK [service-cert-copy : loadbalancer | Copying over extra CA certificates] *** 2025-05-25 03:56:54.200099 | orchestrator | Sunday 25 May 2025 03:51:22 +0000 (0:00:00.873) 0:00:44.233 ************ 2025-05-25 03:56:54.200110 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 
'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}}) 2025-05-25 03:56:54.200127 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}}) 2025-05-25 03:56:54.200158 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}}) 2025-05-25 03:56:54.200170 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-05-25 03:56:54.200181 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-05-25 03:56:54.200193 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-05-25 03:56:54.200204 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2025-05-25 03:56:54.200215 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2025-05-25 03:56:54.200233 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2025-05-25 03:56:54.200245 | orchestrator | 2025-05-25 03:56:54.200261 | orchestrator | TASK [service-cert-copy : loadbalancer | Copying over backend internal TLS certificate] *** 2025-05-25 03:56:54.200272 | orchestrator | Sunday 25 May 2025 03:51:26 +0000 (0:00:03.623) 0:00:47.857 ************ 2025-05-25 03:56:54.200291 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 
'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2025-05-25 03:56:54.200303 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-05-25 03:56:54.200314 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-05-25 03:56:54.200325 | orchestrator | skipping: [testbed-node-0] 2025-05-25 03:56:54.200336 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 
'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2025-05-25 03:56:54.200348 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-05-25 03:56:54.200365 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-05-25 03:56:54.200377 | orchestrator | skipping: [testbed-node-2] 2025-05-25 03:56:54.200393 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 
'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2025-05-25 03:56:54.200411 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-05-25 03:56:54.200423 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-05-25 03:56:54.200434 | orchestrator | skipping: [testbed-node-1] 2025-05-25 03:56:54.200445 | orchestrator | 2025-05-25 03:56:54.200456 | orchestrator | TASK [service-cert-copy : loadbalancer | Copying over backend internal TLS key] *** 2025-05-25 03:56:54.200466 | orchestrator | Sunday 25 May 2025 03:51:27 +0000 (0:00:00.801) 0:00:48.658 ************ 2025-05-25 03:56:54.200477 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 
'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2025-05-25 03:56:54.200489 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-05-25 03:56:54.200506 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-05-25 03:56:54.200518 | orchestrator | skipping: [testbed-node-0] 2025-05-25 03:56:54.200529 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 
'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2025-05-25 03:56:54.200611 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-05-25 03:56:54.200634 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-05-25 03:56:54.200646 | orchestrator | skipping: [testbed-node-1] 2025-05-25 03:56:54.200657 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': 
['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2025-05-25 03:56:54.200669 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-05-25 03:56:54.200688 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-05-25 03:56:54.200699 | orchestrator | skipping: [testbed-node-2] 2025-05-25 03:56:54.200710 | orchestrator | 2025-05-25 03:56:54.200721 | orchestrator | TASK [service-cert-copy : mariadb | Copying over extra CA certificates] ******** 2025-05-25 03:56:54.200732 | orchestrator | Sunday 25 May 2025 03:51:28 +0000 (0:00:01.326) 0:00:49.984 ************ 2025-05-25 03:56:54.200743 
| orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2025-05-25 03:56:54.200767 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-05-25 03:56:54.200780 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-05-25 03:56:54.200791 | orchestrator | skipping: [testbed-node-0] 2025-05-25 03:56:54.200802 | orchestrator | skipping: [testbed-node-1] => 
(item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2025-05-25 03:56:54.200814 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-05-25 03:56:54.200831 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-05-25 03:56:54.200843 | orchestrator | skipping: [testbed-node-1] 2025-05-25 03:56:54.200854 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': 
{'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2025-05-25 03:56:54.200865 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-05-25 03:56:54.200887 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-05-25 03:56:54.200899 | orchestrator | skipping: [testbed-node-2] 2025-05-25 03:56:54.200910 | orchestrator | 2025-05-25 03:56:54.200921 | orchestrator | TASK [service-cert-copy : mariadb | Copying over backend internal TLS 
certificate] *** 2025-05-25 03:56:54.200932 | orchestrator | Sunday 25 May 2025 03:51:29 +0000 (0:00:00.583) 0:00:50.568 ************ 2025-05-25 03:56:54.200943 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2025-05-25 03:56:54.200955 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-05-25 03:56:54.200976 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 
'dimensions': {}}})  2025-05-25 03:56:54.200996 | orchestrator | skipping: [testbed-node-0] 2025-05-25 03:56:54.201052 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2025-05-25 03:56:54.201073 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-05-25 03:56:54.201099 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-05-25 03:56:54.201117 | 
orchestrator | skipping: [testbed-node-1] 2025-05-25 03:56:54.201146 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2025-05-25 03:56:54.201167 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-05-25 03:56:54.201186 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-05-25 03:56:54.201219 | orchestrator | skipping: [testbed-node-2] 
2025-05-25 03:56:54.201238 | orchestrator | 2025-05-25 03:56:54.201255 | orchestrator | TASK [service-cert-copy : mariadb | Copying over backend internal TLS key] ***** 2025-05-25 03:56:54.201275 | orchestrator | Sunday 25 May 2025 03:51:29 +0000 (0:00:00.578) 0:00:51.146 ************ 2025-05-25 03:56:54.201293 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2025-05-25 03:56:54.201311 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-05-25 03:56:54.201330 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-05-25 03:56:54.201349 | orchestrator | skipping: [testbed-node-0] 2025-05-25 03:56:54.201379 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2025-05-25 03:56:54.201391 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-05-25 03:56:54.201403 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-05-25 03:56:54.201422 | orchestrator | skipping: [testbed-node-1] 2025-05-25 03:56:54.201433 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2025-05-25 03:56:54.201444 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-05-25 03:56:54.201457 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 
'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-05-25 03:56:54.201476 | orchestrator | skipping: [testbed-node-2] 2025-05-25 03:56:54.201505 | orchestrator | 2025-05-25 03:56:54.201525 | orchestrator | TASK [service-cert-copy : proxysql | Copying over extra CA certificates] ******* 2025-05-25 03:56:54.201543 | orchestrator | Sunday 25 May 2025 03:51:30 +0000 (0:00:01.259) 0:00:52.405 ************ 2025-05-25 03:56:54.201570 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2025-05-25 03:56:54.201602 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-05-25 03:56:54.201623 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 
'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2025-05-25 03:56:54.201653 | orchestrator | skipping: [testbed-node-0]
2025-05-25 03:56:54.201671 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})
2025-05-25 03:56:54.201690 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2025-05-25 03:56:54.201709 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2025-05-25 03:56:54.201728 | orchestrator | skipping: [testbed-node-1]
2025-05-25 03:56:54.201748 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})
2025-05-25 03:56:54.201784 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2025-05-25 03:56:54.201813 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2025-05-25 03:56:54.201848 | orchestrator | skipping: [testbed-node-2]
2025-05-25 03:56:54.201866 | orchestrator |
2025-05-25 03:56:54.201884 | orchestrator | TASK [service-cert-copy : proxysql | Copying over backend internal TLS certificate] ***
2025-05-25 03:56:54.201903 | orchestrator | Sunday 25 May 2025 03:51:31 +0000 (0:00:00.786) 0:00:53.192 ************
2025-05-25 03:56:54.201923 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})
2025-05-25 03:56:54.201942 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2025-05-25 03:56:54.201960 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2025-05-25 03:56:54.201972 | orchestrator | skipping: [testbed-node-0]
2025-05-25 03:56:54.201983 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})
2025-05-25 03:56:54.202170 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2025-05-25 03:56:54.202229 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2025-05-25 03:56:54.202254 | orchestrator | skipping: [testbed-node-1]
2025-05-25 03:56:54.202267 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})
2025-05-25 03:56:54.202278 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2025-05-25 03:56:54.202290 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2025-05-25 03:56:54.202301 | orchestrator | skipping: [testbed-node-2]
2025-05-25 03:56:54.202312 | orchestrator |
2025-05-25 03:56:54.202323 | orchestrator | TASK [service-cert-copy : proxysql | Copying over backend internal TLS key] ****
2025-05-25 03:56:54.202334 | orchestrator | Sunday 25 May 2025 03:51:32 +0000 (0:00:00.697) 0:00:53.890 ************
2025-05-25 03:56:54.202345 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})
2025-05-25 03:56:54.202357 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2025-05-25 03:56:54.202375 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2025-05-25 03:56:54.202393 | orchestrator | skipping: [testbed-node-0]
2025-05-25 03:56:54.202413 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})
2025-05-25 03:56:54.202425 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2025-05-25 03:56:54.202436 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2025-05-25 03:56:54.202447 | orchestrator | skipping: [testbed-node-1]
2025-05-25 03:56:54.202458 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})
2025-05-25 03:56:54.202469 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2025-05-25 03:56:54.202480 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2025-05-25 03:56:54.202498 | orchestrator | skipping: [testbed-node-2]
2025-05-25 03:56:54.202509 | orchestrator |
2025-05-25 03:56:54.202520 | orchestrator | TASK [loadbalancer : Copying over haproxy start script] ************************
2025-05-25 03:56:54.202536 | orchestrator | Sunday 25 May 2025 03:51:33 +0000 (0:00:01.329) 0:00:55.219 ************
2025-05-25 03:56:54.202547 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_run.sh.j2)
2025-05-25 03:56:54.202558 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_run.sh.j2)
2025-05-25 03:56:54.202586 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_run.sh.j2)
2025-05-25 03:56:54.202598 | orchestrator |
2025-05-25 03:56:54.202609 | orchestrator | TASK [loadbalancer : Copying over proxysql start script] ***********************
2025-05-25 03:56:54.202619 | orchestrator | Sunday 25 May 2025 03:51:35 +0000 (0:00:01.523) 0:00:56.743 ************
2025-05-25 03:56:54.202630 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql_run.sh.j2)
2025-05-25 03:56:54.202641 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql_run.sh.j2)
2025-05-25 03:56:54.202652 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql_run.sh.j2)
2025-05-25 03:56:54.202663 | orchestrator |
2025-05-25 03:56:54.202673 | orchestrator | TASK [loadbalancer : Copying files for haproxy-ssh] ****************************
2025-05-25 03:56:54.202684 | orchestrator | Sunday 25 May 2025 03:51:37 +0000 (0:00:01.906) 0:00:58.649 ************
2025-05-25 03:56:54.202695 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'haproxy-ssh/sshd_config.j2', 'dest': 'sshd_config'})
2025-05-25 03:56:54.202706 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'haproxy-ssh/sshd_config.j2', 'dest': 'sshd_config'})
2025-05-25 03:56:54.202717 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'haproxy-ssh/sshd_config.j2', 'dest': 'sshd_config'})
2025-05-25 03:56:54.202728 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'haproxy-ssh/id_rsa.pub', 'dest': 'id_rsa.pub'})
2025-05-25 03:56:54.202757 | orchestrator | skipping: [testbed-node-1]
2025-05-25 03:56:54.202778 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'haproxy-ssh/id_rsa.pub', 'dest': 'id_rsa.pub'})
2025-05-25 03:56:54.202790 | orchestrator | skipping: [testbed-node-2]
2025-05-25 03:56:54.202801 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'haproxy-ssh/id_rsa.pub', 'dest': 'id_rsa.pub'})
2025-05-25 03:56:54.202812 | orchestrator | skipping: [testbed-node-0]
2025-05-25 03:56:54.202823 | orchestrator |
2025-05-25 03:56:54.202833 | orchestrator | TASK [loadbalancer : Check loadbalancer containers] ****************************
2025-05-25 03:56:54.202844 | orchestrator | Sunday 25 May 2025 03:51:39 +0000 (0:00:01.984) 0:01:00.634 ************
2025-05-25 03:56:54.202856 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})
2025-05-25 03:56:54.202868 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})
2025-05-25 03:56:54.202886 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})
2025-05-25 03:56:54.202912 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2025-05-25 03:56:54.202924 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2025-05-25 03:56:54.202935 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2025-05-25 03:56:54.202946 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2025-05-25 03:56:54.202958 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2025-05-25 03:56:54.202969 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2025-05-25 03:56:54.202988 | orchestrator |
2025-05-25 03:56:54.202999 | orchestrator | TASK [include_role : aodh] *****************************************************
2025-05-25 03:56:54.203031 | orchestrator | Sunday 25 May 2025 03:51:41 +0000 (0:00:02.495) 0:01:03.130 ************
2025-05-25 03:56:54.203042 | orchestrator | included: aodh for testbed-node-0, testbed-node-1, testbed-node-2
2025-05-25 03:56:54.203053 | orchestrator |
2025-05-25 03:56:54.203064 | orchestrator | TASK [haproxy-config : Copying over aodh haproxy config] ***********************
2025-05-25 03:56:54.203075 | orchestrator | Sunday 25 May 2025 03:51:42 +0000 (0:00:00.731) 0:01:03.861 ************
2025-05-25 03:56:54.203097 | orchestrator | changed: [testbed-node-1] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-api:2024.2', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}})
2025-05-25 03:56:54.203110 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-evaluator:2024.2', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})
2025-05-25 03:56:54.203122 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-listener:2024.2', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})
2025-05-25 03:56:54.203134 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-notifier:2024.2', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})
2025-05-25 03:56:54.203145 | orchestrator | changed: [testbed-node-0] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-api:2024.2', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}})
2025-05-25 03:56:54.203164 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-evaluator:2024.2', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})
2025-05-25 03:56:54.203180 | orchestrator | changed: [testbed-node-2] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-api:2024.2', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}})
2025-05-25 03:56:54.203199 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-listener:2024.2', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})
2025-05-25 03:56:54.203211 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-notifier:2024.2', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})
2025-05-25 03:56:54.203222 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-evaluator:2024.2', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})
2025-05-25 03:56:54.203234 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-listener:2024.2', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})
2025-05-25 03:56:54.203252 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-notifier:2024.2', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})
2025-05-25 03:56:54.203263 | orchestrator |
2025-05-25 03:56:54.203274 | orchestrator | TASK [haproxy-config : Add configuration for aodh when using single external frontend] ***
2025-05-25 03:56:54.203285 | orchestrator | Sunday 25 May 2025 03:51:45 +0000 (0:00:03.178) 0:01:07.040 ************
2025-05-25 03:56:54.203302 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-api:2024.2', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}})
2025-05-25 03:56:54.203321 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-evaluator:2024.2', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})
2025-05-25 03:56:54.203333 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-listener:2024.2', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})
2025-05-25 03:56:54.203344 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-notifier:2024.2', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})
2025-05-25 03:56:54.203355 | orchestrator | skipping: [testbed-node-0]
2025-05-25 03:56:54.203367 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-api:2024.2', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}})
2025-05-25 03:56:54.203385 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-evaluator:2024.2', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})
2025-05-25 03:56:54.203396 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-listener:2024.2', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})
2025-05-25 03:56:54.203416 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-notifier:2024.2', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port
aodh-notifier 5672'], 'timeout': '30'}}})  2025-05-25 03:56:54.203427 | orchestrator | skipping: [testbed-node-1] 2025-05-25 03:56:54.203445 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-api:2024.2', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}})  2025-05-25 03:56:54.203458 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-evaluator:2024.2', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2025-05-25 03:56:54.203469 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-listener:2024.2', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2025-05-25 03:56:54.203486 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-notifier:2024.2', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2025-05-25 03:56:54.203498 | orchestrator | skipping: [testbed-node-2] 2025-05-25 03:56:54.203509 | orchestrator | 2025-05-25 03:56:54.203519 | orchestrator | TASK [haproxy-config : Configuring firewall for aodh] ************************** 2025-05-25 03:56:54.203530 | orchestrator | Sunday 25 May 2025 03:51:46 +0000 (0:00:00.776) 0:01:07.816 ************ 2025-05-25 03:56:54.203542 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}})  2025-05-25 03:56:54.203555 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}})  2025-05-25 03:56:54.203566 | orchestrator | skipping: [testbed-node-0] 2025-05-25 03:56:54.203577 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}})  2025-05-25 03:56:54.203588 | orchestrator | skipping: 
[testbed-node-1] => (item={'key': 'aodh_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}})  2025-05-25 03:56:54.203599 | orchestrator | skipping: [testbed-node-1] 2025-05-25 03:56:54.203615 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}})  2025-05-25 03:56:54.203627 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}})  2025-05-25 03:56:54.203638 | orchestrator | skipping: [testbed-node-2] 2025-05-25 03:56:54.203649 | orchestrator | 2025-05-25 03:56:54.203665 | orchestrator | TASK [proxysql-config : Copying over aodh ProxySQL users config] *************** 2025-05-25 03:56:54.203677 | orchestrator | Sunday 25 May 2025 03:51:47 +0000 (0:00:01.087) 0:01:08.904 ************ 2025-05-25 03:56:54.203687 | orchestrator | changed: [testbed-node-0] 2025-05-25 03:56:54.203698 | orchestrator | changed: [testbed-node-1] 2025-05-25 03:56:54.203709 | orchestrator | changed: [testbed-node-2] 2025-05-25 03:56:54.203720 | orchestrator | 2025-05-25 03:56:54.203730 | orchestrator | TASK [proxysql-config : Copying over aodh ProxySQL rules config] *************** 2025-05-25 03:56:54.203741 | orchestrator | Sunday 25 May 2025 03:51:48 +0000 (0:00:01.301) 0:01:10.205 ************ 2025-05-25 03:56:54.203752 | orchestrator | changed: [testbed-node-0] 2025-05-25 03:56:54.203763 | orchestrator | changed: [testbed-node-1] 2025-05-25 03:56:54.203774 | orchestrator | changed: [testbed-node-2] 2025-05-25 03:56:54.203785 | orchestrator | 2025-05-25 03:56:54.203795 | orchestrator | TASK [include_role : barbican] ************************************************* 2025-05-25 03:56:54.203806 | 
orchestrator | Sunday 25 May 2025 03:51:50 +0000 (0:00:01.782) 0:01:11.987 ************ 2025-05-25 03:56:54.203823 | orchestrator | included: barbican for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-25 03:56:54.203834 | orchestrator | 2025-05-25 03:56:54.203845 | orchestrator | TASK [haproxy-config : Copying over barbican haproxy config] ******************* 2025-05-25 03:56:54.203855 | orchestrator | Sunday 25 May 2025 03:51:51 +0000 (0:00:00.564) 0:01:12.552 ************ 2025-05-25 03:56:54.203867 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-05-25 03:56:54.203879 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': 
'30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-05-25 03:56:54.203891 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-05-25 03:56:54.203907 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-05-25 03:56:54.203926 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': 
['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-05-25 03:56:54.203945 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-05-25 03:56:54.203956 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-05-25 03:56:54.203968 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-05-25 03:56:54.203979 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-05-25 03:56:54.203990 | orchestrator | 2025-05-25 03:56:54.204001 | orchestrator | TASK [haproxy-config : Add configuration for barbican when using single external frontend] *** 2025-05-25 03:56:54.204031 | orchestrator | Sunday 25 May 2025 03:51:55 +0000 (0:00:04.312) 0:01:16.864 ************ 2025-05-25 03:56:54.204054 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2025-05-25 03:56:54.204072 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-05-25 03:56:54.204084 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-05-25 03:56:54.204096 | orchestrator | skipping: [testbed-node-0] 2025-05-25 03:56:54.204107 | orchestrator | 
skipping: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2025-05-25 03:56:54.204119 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-05-25 03:56:54.204134 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-05-25 03:56:54.204146 | orchestrator | skipping: [testbed-node-2] 2025-05-25 03:56:54.204163 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2025-05-25 03:56:54.204182 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-05-25 03:56:54.204193 | 
orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-05-25 03:56:54.204205 | orchestrator | skipping: [testbed-node-1] 2025-05-25 03:56:54.204216 | orchestrator | 2025-05-25 03:56:54.204227 | orchestrator | TASK [haproxy-config : Configuring firewall for barbican] ********************** 2025-05-25 03:56:54.204238 | orchestrator | Sunday 25 May 2025 03:51:56 +0000 (0:00:00.647) 0:01:17.512 ************ 2025-05-25 03:56:54.204249 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})  2025-05-25 03:56:54.204261 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})  2025-05-25 03:56:54.204273 | orchestrator | skipping: [testbed-node-0] 2025-05-25 03:56:54.204284 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})  2025-05-25 03:56:54.204295 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 
'listen_port': '9311', 'tls_backend': 'no'}})  2025-05-25 03:56:54.204306 | orchestrator | skipping: [testbed-node-1] 2025-05-25 03:56:54.204317 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})  2025-05-25 03:56:54.204328 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})  2025-05-25 03:56:54.204339 | orchestrator | skipping: [testbed-node-2] 2025-05-25 03:56:54.204350 | orchestrator | 2025-05-25 03:56:54.204360 | orchestrator | TASK [proxysql-config : Copying over barbican ProxySQL users config] *********** 2025-05-25 03:56:54.204378 | orchestrator | Sunday 25 May 2025 03:51:56 +0000 (0:00:00.767) 0:01:18.279 ************ 2025-05-25 03:56:54.204389 | orchestrator | changed: [testbed-node-0] 2025-05-25 03:56:54.204400 | orchestrator | changed: [testbed-node-1] 2025-05-25 03:56:54.204411 | orchestrator | changed: [testbed-node-2] 2025-05-25 03:56:54.204422 | orchestrator | 2025-05-25 03:56:54.204433 | orchestrator | TASK [proxysql-config : Copying over barbican ProxySQL rules config] *********** 2025-05-25 03:56:54.204443 | orchestrator | Sunday 25 May 2025 03:51:58 +0000 (0:00:01.664) 0:01:19.944 ************ 2025-05-25 03:56:54.204454 | orchestrator | changed: [testbed-node-0] 2025-05-25 03:56:54.204465 | orchestrator | changed: [testbed-node-1] 2025-05-25 03:56:54.204476 | orchestrator | changed: [testbed-node-2] 2025-05-25 03:56:54.204486 | orchestrator | 2025-05-25 03:56:54.204503 | orchestrator | TASK [include_role : blazar] *************************************************** 2025-05-25 03:56:54.204514 | orchestrator | Sunday 25 May 2025 03:52:00 +0000 (0:00:01.893) 0:01:21.837 ************ 2025-05-25 
03:56:54.204525 | orchestrator | skipping: [testbed-node-0] 2025-05-25 03:56:54.204536 | orchestrator | skipping: [testbed-node-1] 2025-05-25 03:56:54.204546 | orchestrator | skipping: [testbed-node-2] 2025-05-25 03:56:54.204557 | orchestrator | 2025-05-25 03:56:54.204568 | orchestrator | TASK [include_role : ceph-rgw] ************************************************* 2025-05-25 03:56:54.204632 | orchestrator | Sunday 25 May 2025 03:52:00 +0000 (0:00:00.301) 0:01:22.139 ************ 2025-05-25 03:56:54.204650 | orchestrator | included: ceph-rgw for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-25 03:56:54.204662 | orchestrator | 2025-05-25 03:56:54.204673 | orchestrator | TASK [haproxy-config : Copying over ceph-rgw haproxy config] ******************* 2025-05-25 03:56:54.204684 | orchestrator | Sunday 25 May 2025 03:52:01 +0000 (0:00:00.694) 0:01:22.834 ************ 2025-05-25 03:56:54.204695 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}}}}) 2025-05-25 03:56:54.204708 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 
'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}}}}) 2025-05-25 03:56:54.204720 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}}}}) 2025-05-25 03:56:54.204738 | orchestrator | 2025-05-25 03:56:54.204749 | orchestrator | TASK [haproxy-config : Add configuration for ceph-rgw when using single external frontend] *** 2025-05-25 03:56:54.204760 | orchestrator | Sunday 25 May 2025 03:52:05 +0000 (0:00:04.425) 0:01:27.260 ************ 2025-05-25 03:56:54.204785 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 
'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}}}})  2025-05-25 03:56:54.204806 | orchestrator | skipping: [testbed-node-0] 2025-05-25 03:56:54.204825 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}}}})  2025-05-25 03:56:54.204845 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 
2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}}}})  2025-05-25 03:56:54.204865 | orchestrator | skipping: [testbed-node-2] 2025-05-25 03:56:54.204884 | orchestrator | skipping: [testbed-node-1] 2025-05-25 03:56:54.204904 | orchestrator | 2025-05-25 03:56:54.204921 | orchestrator | TASK [haproxy-config : Configuring firewall for ceph-rgw] ********************** 2025-05-25 03:56:54.204932 | orchestrator | Sunday 25 May 2025 03:52:07 +0000 (0:00:01.453) 0:01:28.714 ************ 2025-05-25 03:56:54.204961 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'radosgw', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}})  2025-05-25 03:56:54.204994 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'radosgw_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}})  2025-05-25 03:56:54.205034 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'radosgw', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 
'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}})  2025-05-25 03:56:54.205054 | orchestrator | skipping: [testbed-node-1] 2025-05-25 03:56:54.205082 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'radosgw_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}})  2025-05-25 03:56:54.205102 | orchestrator | skipping: [testbed-node-0] 2025-05-25 03:56:54.205124 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'radosgw', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}})  2025-05-25 03:56:54.205136 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'radosgw_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}})  2025-05-25 03:56:54.205148 | orchestrator | skipping: [testbed-node-2] 2025-05-25 03:56:54.205158 | orchestrator | 2025-05-25 03:56:54.205170 | orchestrator | TASK [proxysql-config : Copying over ceph-rgw ProxySQL 
users config] ***********
2025-05-25 03:56:54.205181 | orchestrator | Sunday 25 May 2025 03:52:09 +0000 (0:00:01.896) 0:01:30.611 ************
2025-05-25 03:56:54.205192 | orchestrator | skipping: [testbed-node-0]
2025-05-25 03:56:54.205203 | orchestrator | skipping: [testbed-node-1]
2025-05-25 03:56:54.205214 | orchestrator | skipping: [testbed-node-2]
2025-05-25 03:56:54.205225 | orchestrator |
2025-05-25 03:56:54.205236 | orchestrator | TASK [proxysql-config : Copying over ceph-rgw ProxySQL rules config] ***********
2025-05-25 03:56:54.205247 | orchestrator | Sunday 25 May 2025 03:52:10 +0000 (0:00:00.869) 0:01:31.480 ************
2025-05-25 03:56:54.205257 | orchestrator | skipping: [testbed-node-0]
2025-05-25 03:56:54.205268 | orchestrator | skipping: [testbed-node-1]
2025-05-25 03:56:54.205279 | orchestrator | skipping: [testbed-node-2]
2025-05-25 03:56:54.205289 | orchestrator |
2025-05-25 03:56:54.205300 | orchestrator | TASK [include_role : cinder] ***************************************************
2025-05-25 03:56:54.205311 | orchestrator | Sunday 25 May 2025 03:52:11 +0000 (0:00:01.021) 0:01:32.502 ************
2025-05-25 03:56:54.205322 | orchestrator | included: cinder for testbed-node-0, testbed-node-1, testbed-node-2
2025-05-25 03:56:54.205333 | orchestrator |
2025-05-25 03:56:54.205344 | orchestrator | TASK [haproxy-config : Copying over cinder haproxy config] *********************
2025-05-25 03:56:54.205354 | orchestrator | Sunday 25 May 2025 03:52:12 +0000 (0:00:00.918) 0:01:33.421 ************
2025-05-25 03:56:54.205365 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck':
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-05-25 03:56:54.205386 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-05-25 03:56:54.205403 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-05-25 03:56:54.205423 | 
orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-05-25 03:56:54.205435 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-05-25 03:56:54.205447 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-05-25 03:56:54.205465 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-05-25 03:56:54.205477 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-05-25 03:56:54.205500 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-05-25 03:56:54.205512 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-05-25 03:56:54.205523 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 
'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-05-25 03:56:54.205541 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-05-25 03:56:54.205553 | orchestrator | 2025-05-25 03:56:54.205564 | orchestrator | TASK [haproxy-config : Add configuration for cinder when using single external frontend] *** 2025-05-25 03:56:54.205575 | orchestrator | Sunday 25 May 2025 03:52:15 +0000 (0:00:03.938) 0:01:37.359 ************ 2025-05-25 03:56:54.205586 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': 
{'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-05-25 03:56:54.205602 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-05-25 03:56:54.205621 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-05-25 03:56:54.205632 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 
'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-05-25 03:56:54.205650 | orchestrator | skipping: [testbed-node-0] 2025-05-25 03:56:54.205662 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-05-25 03:56:54.205673 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': 
{}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-05-25 03:56:54.205689 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-05-25 03:56:54.206658 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-05-25 03:56:54.206693 | orchestrator | skipping: [testbed-node-1] 2025-05-25 03:56:54.206706 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 
'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-05-25 03:56:54.206731 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-05-25 03:56:54.206743 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 
'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-05-25 03:56:54.206755 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-05-25 03:56:54.206766 | orchestrator | skipping: [testbed-node-2] 2025-05-25 03:56:54.206778 | orchestrator | 2025-05-25 03:56:54.206789 | orchestrator | TASK [haproxy-config : Configuring firewall for cinder] ************************ 2025-05-25 03:56:54.206813 | orchestrator | Sunday 25 May 2025 03:52:17 +0000 (0:00:01.193) 0:01:38.553 ************ 2025-05-25 03:56:54.206825 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})  2025-05-25 03:56:54.206918 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})  2025-05-25 03:56:54.206936 | orchestrator | skipping: [testbed-node-0] 2025-05-25 
03:56:54.206948 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})
2025-05-25 03:56:54.206959 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})
2025-05-25 03:56:54.206980 | orchestrator | skipping: [testbed-node-1]
2025-05-25 03:56:54.206991 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})
2025-05-25 03:56:54.207027 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})
2025-05-25 03:56:54.207041 | orchestrator | skipping: [testbed-node-2]
2025-05-25 03:56:54.207099 | orchestrator |
2025-05-25 03:56:54.207117 | orchestrator | TASK [proxysql-config : Copying over cinder ProxySQL users config] *************
2025-05-25 03:56:54.207135 | orchestrator | Sunday 25 May 2025 03:52:18 +0000 (0:00:00.922) 0:01:39.476 ************
2025-05-25 03:56:54.207514 | orchestrator | changed: [testbed-node-0]
2025-05-25 03:56:54.207529 | orchestrator | changed: [testbed-node-1]
2025-05-25 03:56:54.207540 | orchestrator | changed: [testbed-node-2]
2025-05-25 03:56:54.207551 | orchestrator |
2025-05-25 03:56:54.207562 | orchestrator | TASK [proxysql-config : Copying over cinder ProxySQL rules config] *************
2025-05-25 03:56:54.207573 | orchestrator | Sunday 25 May 2025 03:52:19 +0000 (0:00:01.238) 0:01:40.715 ************
2025-05-25 03:56:54.207584 | orchestrator | changed: [testbed-node-0]
2025-05-25 03:56:54.207595 | orchestrator | changed: [testbed-node-1]
2025-05-25 03:56:54.207606 | orchestrator | changed: [testbed-node-2]
2025-05-25 03:56:54.207617 | orchestrator |
2025-05-25 03:56:54.207627 | orchestrator | TASK [include_role : cloudkitty] ***********************************************
2025-05-25 03:56:54.207638 | orchestrator | Sunday 25 May 2025 03:52:21 +0000 (0:00:02.058) 0:01:42.773 ************
2025-05-25 03:56:54.207649 | orchestrator | skipping: [testbed-node-0]
2025-05-25 03:56:54.207659 | orchestrator | skipping: [testbed-node-1]
2025-05-25 03:56:54.207670 | orchestrator | skipping: [testbed-node-2]
2025-05-25 03:56:54.207681 | orchestrator |
2025-05-25 03:56:54.207692 | orchestrator | TASK [include_role : cyborg] ***************************************************
2025-05-25 03:56:54.207703 | orchestrator | Sunday 25 May 2025 03:52:21 +0000 (0:00:00.546) 0:01:43.320 ************
2025-05-25 03:56:54.207713 | orchestrator | skipping: [testbed-node-0]
2025-05-25 03:56:54.207724 | orchestrator | skipping: [testbed-node-1]
2025-05-25 03:56:54.207735 | orchestrator | skipping: [testbed-node-2]
2025-05-25 03:56:54.207746 | orchestrator |
2025-05-25 03:56:54.207757 | orchestrator | TASK [include_role : designate] ************************************************
2025-05-25 03:56:54.207767 | orchestrator | Sunday 25 May 2025 03:52:22 +0000 (0:00:00.428) 0:01:43.748 ************
2025-05-25 03:56:54.207778 | orchestrator | included: designate for testbed-node-0, testbed-node-1, testbed-node-2
2025-05-25 03:56:54.207789 | orchestrator |
2025-05-25 03:56:54.207800 | orchestrator | TASK [haproxy-config : Copying over designate haproxy config] ******************
2025-05-25 03:56:54.207811 | orchestrator | Sunday 25 May 2025 03:52:23 +0000 (0:00:00.915) 0:01:44.664 ************
2025-05-25 03:56:54.207823 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api',
'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-05-25 03:56:54.208487 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-05-25 03:56:54.208527 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
designate-central 5672'], 'timeout': '30'}}})  2025-05-25 03:56:54.208540 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-05-25 03:56:54.208552 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-05-25 03:56:54.208563 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-05-25 03:56:54.208575 | orchestrator | skipping: [testbed-node-0] => 
(item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/designate-sink:2024.2', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2025-05-25 03:56:54.208592 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-05-25 03:56:54.208698 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-05-25 03:56:54.208715 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-05-25 03:56:54.208727 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-05-25 03:56:54.208738 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': 
'30'}}})  2025-05-25 03:56:54.210171 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-05-25 03:56:54.210208 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/designate-sink:2024.2', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2025-05-25 03:56:54.210265 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 
'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-05-25 03:56:54.210276 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-05-25 03:56:54.210286 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-05-25 03:56:54.210295 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-05-25 03:56:54.210305 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-05-25 03:56:54.210321 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-05-25 03:56:54.210345 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/designate-sink:2024.2', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2025-05-25 
03:56:54.210362 | orchestrator | 2025-05-25 03:56:54.210505 | orchestrator | TASK [haproxy-config : Add configuration for designate when using single external frontend] *** 2025-05-25 03:56:54.210516 | orchestrator | Sunday 25 May 2025 03:52:27 +0000 (0:00:04.475) 0:01:49.140 ************ 2025-05-25 03:56:54.210559 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-05-25 03:56:54.210571 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-05-25 03:56:54.210581 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': 
{'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-05-25 03:56:54.210590 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-05-25 03:56:54.210599 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-05-25 03:56:54.210615 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 
'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-05-25 03:56:54.210643 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/designate-sink:2024.2', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2025-05-25 03:56:54.210653 | orchestrator | skipping: [testbed-node-0] 2025-05-25 03:56:54.210663 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  
2025-05-25 03:56:54.210672 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-05-25 03:56:54.210681 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-05-25 03:56:54.210696 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-05-25 03:56:54.210709 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-05-25 03:56:54.210725 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-05-25 03:56:54.210734 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-05-25 03:56:54.210743 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-05-25 03:56:54.210753 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-05-25 03:56:54.210762 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 
5672'], 'timeout': '30'}}})  2025-05-25 03:56:54.210780 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/designate-sink:2024.2', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2025-05-25 03:56:54.210789 | orchestrator | skipping: [testbed-node-2] 2025-05-25 03:56:54.210802 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-05-25 03:56:54.210817 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-05-25 03:56:54.210827 
| orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/designate-sink:2024.2', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2025-05-25 03:56:54.210836 | orchestrator | skipping: [testbed-node-1] 2025-05-25 03:56:54.210845 | orchestrator | 2025-05-25 03:56:54.210854 | orchestrator | TASK [haproxy-config : Configuring firewall for designate] ********************* 2025-05-25 03:56:54.210863 | orchestrator | Sunday 25 May 2025 03:52:28 +0000 (0:00:01.150) 0:01:50.291 ************ 2025-05-25 03:56:54.210873 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}})  2025-05-25 03:56:54.210882 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}})  2025-05-25 03:56:54.210892 | orchestrator | skipping: [testbed-node-0] 2025-05-25 03:56:54.210901 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}})  2025-05-25 03:56:54.210909 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}})  2025-05-25 03:56:54.210923 | orchestrator | 
skipping: [testbed-node-1] 2025-05-25 03:56:54.210933 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}})  2025-05-25 03:56:54.210948 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}})  2025-05-25 03:56:54.210981 | orchestrator | skipping: [testbed-node-2] 2025-05-25 03:56:54.210997 | orchestrator | 2025-05-25 03:56:54.211048 | orchestrator | TASK [proxysql-config : Copying over designate ProxySQL users config] ********** 2025-05-25 03:56:54.211064 | orchestrator | Sunday 25 May 2025 03:52:30 +0000 (0:00:01.373) 0:01:51.665 ************ 2025-05-25 03:56:54.211114 | orchestrator | changed: [testbed-node-0] 2025-05-25 03:56:54.211124 | orchestrator | changed: [testbed-node-1] 2025-05-25 03:56:54.211133 | orchestrator | changed: [testbed-node-2] 2025-05-25 03:56:54.211142 | orchestrator | 2025-05-25 03:56:54.211150 | orchestrator | TASK [proxysql-config : Copying over designate ProxySQL rules config] ********** 2025-05-25 03:56:54.211159 | orchestrator | Sunday 25 May 2025 03:52:32 +0000 (0:00:01.942) 0:01:53.607 ************ 2025-05-25 03:56:54.211168 | orchestrator | changed: [testbed-node-0] 2025-05-25 03:56:54.211176 | orchestrator | changed: [testbed-node-1] 2025-05-25 03:56:54.211185 | orchestrator | changed: [testbed-node-2] 2025-05-25 03:56:54.211194 | orchestrator | 2025-05-25 03:56:54.211202 | orchestrator | TASK [include_role : etcd] ***************************************************** 2025-05-25 03:56:54.211211 | orchestrator | Sunday 25 May 2025 03:52:34 +0000 (0:00:01.899) 0:01:55.506 ************ 2025-05-25 03:56:54.211220 | orchestrator | skipping: [testbed-node-0] 2025-05-25 03:56:54.211228 | orchestrator | skipping: [testbed-node-1] 2025-05-25 
03:56:54.211237 | orchestrator | skipping: [testbed-node-2] 2025-05-25 03:56:54.211246 | orchestrator | 2025-05-25 03:56:54.211294 | orchestrator | TASK [include_role : glance] *************************************************** 2025-05-25 03:56:54.211303 | orchestrator | Sunday 25 May 2025 03:52:34 +0000 (0:00:00.311) 0:01:55.817 ************ 2025-05-25 03:56:54.211311 | orchestrator | included: glance for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-25 03:56:54.211320 | orchestrator | 2025-05-25 03:56:54.211329 | orchestrator | TASK [haproxy-config : Copying over glance haproxy config] ********************* 2025-05-25 03:56:54.211342 | orchestrator | Sunday 25 May 2025 03:52:35 +0000 (0:00:00.885) 0:01:56.703 ************ 2025-05-25 03:56:54.211386 | orchestrator | changed: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', 
'']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-05-25 03:56:54.211408 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/glance-tls-proxy:2024.2', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 
'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2025-05-25 03:56:54.211430 | orchestrator | changed: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 
fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-05-25 03:56:54.211441 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/glance-tls-proxy:2024.2', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 
2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2025-05-25 03:56:54.211467 | orchestrator | changed: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-05-25 03:56:54.211478 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 
'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/glance-tls-proxy:2024.2', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2025-05-25 03:56:54.211493 | orchestrator | 2025-05-25 03:56:54.211502 | orchestrator | TASK [haproxy-config : Add configuration for glance when using single external frontend] *** 2025-05-25 03:56:54.211511 | orchestrator | Sunday 25 May 
2025 03:52:39 +0000 (0:00:04.292) 0:02:00.995 ************ 2025-05-25 03:56:54.211529 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2025-05-25 03:56:54.211540 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': 
True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/glance-tls-proxy:2024.2', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2025-05-25 03:56:54.211555 | orchestrator | skipping: [testbed-node-0] 2025-05-25 03:56:54.211565 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 
True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2025-05-25 03:56:54.211586 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/glance-tls-proxy:2024.2', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2025-05-25 03:56:54.211600 | orchestrator | skipping: [testbed-node-1] 2025-05-25 03:56:54.211610 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': 
['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2025-05-25 03:56:54.211629 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/glance-tls-proxy:2024.2', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9293'], 'timeout': '30'}, 'haproxy': 
{'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2025-05-25 03:56:54.211645 | orchestrator | skipping: [testbed-node-2] 2025-05-25 03:56:54.211654 | orchestrator | 2025-05-25 03:56:54.211662 | orchestrator | TASK [haproxy-config : Configuring firewall for glance] ************************ 2025-05-25 03:56:54.211671 | orchestrator | Sunday 25 May 2025 03:52:42 +0000 (0:00:02.806) 0:02:03.802 ************ 2025-05-25 03:56:54.211680 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server 
testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2025-05-25 03:56:54.211691 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2025-05-25 03:56:54.211700 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2025-05-25 03:56:54.211709 | orchestrator | skipping: [testbed-node-1] 2025-05-25 03:56:54.211719 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2025-05-25 03:56:54.211727 | orchestrator | skipping: [testbed-node-0] 2025-05-25 03:56:54.211737 
| orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2025-05-25 03:56:54.211752 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2025-05-25 03:56:54.211770 | orchestrator | skipping: [testbed-node-2] 2025-05-25 03:56:54.211779 | orchestrator | 2025-05-25 03:56:54.211787 | orchestrator | TASK [proxysql-config : Copying over glance ProxySQL users config] ************* 2025-05-25 03:56:54.211796 | orchestrator | Sunday 25 May 2025 03:52:45 +0000 (0:00:03.268) 0:02:07.070 ************ 2025-05-25 03:56:54.211805 | orchestrator | changed: [testbed-node-0] 2025-05-25 03:56:54.211814 | orchestrator | changed: [testbed-node-1] 2025-05-25 03:56:54.211822 | orchestrator | changed: [testbed-node-2] 2025-05-25 03:56:54.211831 | orchestrator | 2025-05-25 03:56:54.211839 | orchestrator | TASK [proxysql-config : Copying over glance ProxySQL rules config] ************* 2025-05-25 03:56:54.211848 | orchestrator | Sunday 25 May 2025 03:52:47 +0000 (0:00:01.430) 0:02:08.501 ************ 2025-05-25 03:56:54.211857 | orchestrator | changed: [testbed-node-0] 
2025-05-25 03:56:54.211865 | orchestrator | changed: [testbed-node-1] 2025-05-25 03:56:54.211874 | orchestrator | changed: [testbed-node-2] 2025-05-25 03:56:54.211882 | orchestrator | 2025-05-25 03:56:54.211891 | orchestrator | TASK [include_role : gnocchi] ************************************************** 2025-05-25 03:56:54.211899 | orchestrator | Sunday 25 May 2025 03:52:49 +0000 (0:00:02.073) 0:02:10.574 ************ 2025-05-25 03:56:54.211908 | orchestrator | skipping: [testbed-node-0] 2025-05-25 03:56:54.211917 | orchestrator | skipping: [testbed-node-1] 2025-05-25 03:56:54.211925 | orchestrator | skipping: [testbed-node-2] 2025-05-25 03:56:54.211934 | orchestrator | 2025-05-25 03:56:54.211943 | orchestrator | TASK [include_role : grafana] ************************************************** 2025-05-25 03:56:54.211951 | orchestrator | Sunday 25 May 2025 03:52:49 +0000 (0:00:00.369) 0:02:10.944 ************ 2025-05-25 03:56:54.211960 | orchestrator | included: grafana for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-25 03:56:54.211968 | orchestrator | 2025-05-25 03:56:54.211977 | orchestrator | TASK [haproxy-config : Copying over grafana haproxy config] ******************** 2025-05-25 03:56:54.211985 | orchestrator | Sunday 25 May 2025 03:52:50 +0000 (0:00:00.649) 0:02:11.594 ************ 2025-05-25 03:56:54.212218 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 
'listen_port': '3000'}}}}) 2025-05-25 03:56:54.212245 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-05-25 03:56:54.212255 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-05-25 03:56:54.212271 | orchestrator | 2025-05-25 03:56:54.212280 | orchestrator | TASK [haproxy-config : Add configuration for grafana when using single external frontend] *** 2025-05-25 03:56:54.212293 | orchestrator | Sunday 25 May 2025 03:52:53 +0000 (0:00:03.375) 0:02:14.969 ************ 2025-05-25 03:56:54.212310 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': 
['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2025-05-25 03:56:54.212320 | orchestrator | skipping: [testbed-node-0] 2025-05-25 03:56:54.212329 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2025-05-25 03:56:54.212338 | orchestrator | skipping: [testbed-node-1] 2025-05-25 03:56:54.212347 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': 
'3000'}}}})  2025-05-25 03:56:54.212356 | orchestrator | skipping: [testbed-node-2] 2025-05-25 03:56:54.212365 | orchestrator | 2025-05-25 03:56:54.212374 | orchestrator | TASK [haproxy-config : Configuring firewall for grafana] *********************** 2025-05-25 03:56:54.212383 | orchestrator | Sunday 25 May 2025 03:52:53 +0000 (0:00:00.386) 0:02:15.356 ************ 2025-05-25 03:56:54.212392 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana_server', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}})  2025-05-25 03:56:54.212401 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}})  2025-05-25 03:56:54.212410 | orchestrator | skipping: [testbed-node-0] 2025-05-25 03:56:54.212418 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana_server', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}})  2025-05-25 03:56:54.212427 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}})  2025-05-25 03:56:54.212436 | orchestrator | skipping: [testbed-node-1] 2025-05-25 03:56:54.212445 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana_server', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}})  2025-05-25 03:56:54.212459 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}})  2025-05-25 03:56:54.212468 | orchestrator | skipping: [testbed-node-2] 
2025-05-25 03:56:54.212477 | orchestrator | 2025-05-25 03:56:54.212485 | orchestrator | TASK [proxysql-config : Copying over grafana ProxySQL users config] ************ 2025-05-25 03:56:54.212494 | orchestrator | Sunday 25 May 2025 03:52:54 +0000 (0:00:00.545) 0:02:15.901 ************ 2025-05-25 03:56:54.212503 | orchestrator | changed: [testbed-node-0] 2025-05-25 03:56:54.212511 | orchestrator | changed: [testbed-node-1] 2025-05-25 03:56:54.212520 | orchestrator | changed: [testbed-node-2] 2025-05-25 03:56:54.212529 | orchestrator | 2025-05-25 03:56:54.212537 | orchestrator | TASK [proxysql-config : Copying over grafana ProxySQL rules config] ************ 2025-05-25 03:56:54.212546 | orchestrator | Sunday 25 May 2025 03:52:55 +0000 (0:00:01.391) 0:02:17.293 ************ 2025-05-25 03:56:54.212558 | orchestrator | changed: [testbed-node-0] 2025-05-25 03:56:54.212567 | orchestrator | changed: [testbed-node-1] 2025-05-25 03:56:54.212576 | orchestrator | changed: [testbed-node-2] 2025-05-25 03:56:54.212584 | orchestrator | 2025-05-25 03:56:54.212593 | orchestrator | TASK [include_role : heat] ***************************************************** 2025-05-25 03:56:54.212602 | orchestrator | Sunday 25 May 2025 03:52:57 +0000 (0:00:01.758) 0:02:19.052 ************ 2025-05-25 03:56:54.212610 | orchestrator | skipping: [testbed-node-0] 2025-05-25 03:56:54.212619 | orchestrator | skipping: [testbed-node-1] 2025-05-25 03:56:54.212632 | orchestrator | skipping: [testbed-node-2] 2025-05-25 03:56:54.212641 | orchestrator | 2025-05-25 03:56:54.212650 | orchestrator | TASK [include_role : horizon] ************************************************** 2025-05-25 03:56:54.212659 | orchestrator | Sunday 25 May 2025 03:52:57 +0000 (0:00:00.258) 0:02:19.310 ************ 2025-05-25 03:56:54.212668 | orchestrator | included: horizon for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-25 03:56:54.212676 | orchestrator | 2025-05-25 03:56:54.212685 | orchestrator | TASK [haproxy-config 
: Copying over horizon haproxy config] ******************** 2025-05-25 03:56:54.212693 | orchestrator | Sunday 25 May 2025 03:52:58 +0000 (0:00:00.831) 0:02:20.141 ************ 2025-05-25 03:56:54.212704 | orchestrator | changed: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 
'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2025-05-25 03:56:54.212730 | orchestrator | changed: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': 
'443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})
2025-05-25 03:56:54.212741 | orchestrator | changed: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend
acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})
2025-05-25 03:56:54.212756 | orchestrator |
2025-05-25 03:56:54.212765 | orchestrator | TASK [haproxy-config : Add configuration for horizon when using single external frontend] ***
2025-05-25 03:56:54.212773 | orchestrator | Sunday 25 May 2025 03:53:03 +0000 (0:00:04.382) 0:02:24.523 ************
2025-05-25 03:56:54.212793 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL',
'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})
2025-05-25 03:56:54.212804 | orchestrator | skipping: [testbed-node-0]
2025-05-25 03:56:54.212814 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes':
['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})
2025-05-25 03:56:54.212828 | orchestrator | skipping: [testbed-node-2]
2025-05-25 03:56:54.213705 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM':
'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})
2025-05-25 03:56:54.213737 | orchestrator | skipping: [testbed-node-1]
2025-05-25 03:56:54.213747 | orchestrator |
2025-05-25 03:56:54.213756 | orchestrator | TASK [haproxy-config : Configuring firewall for
horizon] ***********************
2025-05-25 03:56:54.213764 | orchestrator | Sunday 25 May 2025 03:53:04 +0000 (0:00:01.007) 0:02:25.530 ************
2025-05-25 03:56:54.213774 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})
2025-05-25 03:56:54.213795 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})
2025-05-25 03:56:54.213805 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})
2025-05-25 03:56:54.213815 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon_external_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})
2025-05-25 03:56:54.213824 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'acme_client', 'value': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}})
2025-05-25 03:56:54.213834 | orchestrator | skipping: [testbed-node-0]
2025-05-25 03:56:54.213843 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon', 'value':
{'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})
2025-05-25 03:56:54.213852 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})
2025-05-25 03:56:54.213925 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})
2025-05-25 03:56:54.213938 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})
2025-05-25 03:56:54.213948 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon_external_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})
2025-05-25 03:56:54.213957 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend
acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})
2025-05-25 03:56:54.213966 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'acme_client', 'value': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}})
2025-05-25 03:56:54.213975 | orchestrator | skipping: [testbed-node-1]
2025-05-25 03:56:54.213984 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})
2025-05-25 03:56:54.214115 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon_external_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})
2025-05-25 03:56:54.214134 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'acme_client', 'value': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}})
2025-05-25 03:56:54.214142 | orchestrator | skipping: [testbed-node-2]
2025-05-25 03:56:54.214151 | orchestrator |
2025-05-25 03:56:54.214185 | orchestrator | TASK [proxysql-config : Copying over horizon ProxySQL users config] ************
2025-05-25 03:56:54.214195 | orchestrator | Sunday 25 May 2025 03:53:05 +0000 (0:00:01.396) 0:02:26.927 ************
2025-05-25 03:56:54.214203 | orchestrator | changed: [testbed-node-0]
2025-05-25 03:56:54.214212 | orchestrator | changed: [testbed-node-1]
2025-05-25 03:56:54.214221 | orchestrator | changed: [testbed-node-2]
2025-05-25 03:56:54.214229 | orchestrator |
2025-05-25 03:56:54.214238 | orchestrator | TASK [proxysql-config : Copying over
horizon ProxySQL rules config] ************
2025-05-25 03:56:54.214247 | orchestrator | Sunday 25 May 2025 03:53:07 +0000 (0:00:01.676) 0:02:28.603 ************
2025-05-25 03:56:54.214255 | orchestrator | changed: [testbed-node-0]
2025-05-25 03:56:54.214264 | orchestrator | changed: [testbed-node-1]
2025-05-25 03:56:54.214273 | orchestrator | changed: [testbed-node-2]
2025-05-25 03:56:54.214281 | orchestrator |
2025-05-25 03:56:54.214290 | orchestrator | TASK [include_role : influxdb] *************************************************
2025-05-25 03:56:54.214298 | orchestrator | Sunday 25 May 2025 03:53:09 +0000 (0:00:02.077) 0:02:30.681 ************
2025-05-25 03:56:54.214307 | orchestrator | skipping: [testbed-node-0]
2025-05-25 03:56:54.214316 | orchestrator | skipping: [testbed-node-1]
2025-05-25 03:56:54.214324 | orchestrator | skipping: [testbed-node-2]
2025-05-25 03:56:54.214333 | orchestrator |
2025-05-25 03:56:54.214341 | orchestrator | TASK [include_role : ironic] ***************************************************
2025-05-25 03:56:54.214350 | orchestrator | Sunday 25 May 2025 03:53:09 +0000 (0:00:00.365) 0:02:31.046 ************
2025-05-25 03:56:54.214358 | orchestrator | skipping: [testbed-node-0]
2025-05-25 03:56:54.214367 | orchestrator | skipping: [testbed-node-1]
2025-05-25 03:56:54.214376 | orchestrator | skipping: [testbed-node-2]
2025-05-25 03:56:54.214385 | orchestrator |
2025-05-25 03:56:54.214393 | orchestrator | TASK [include_role : keystone] *************************************************
2025-05-25 03:56:54.214402 | orchestrator | Sunday 25 May 2025 03:53:09 +0000 (0:00:00.295) 0:02:31.341 ************
2025-05-25 03:56:54.214411 | orchestrator | included: keystone for testbed-node-0, testbed-node-1, testbed-node-2
2025-05-25 03:56:54.214419 | orchestrator |
2025-05-25 03:56:54.214428 | orchestrator | TASK [haproxy-config : Copying over keystone haproxy config] *******************
2025-05-25 03:56:54.214436 | orchestrator | Sunday 25 May
2025 03:53:11 +0000 (0:00:01.281) 0:02:32.623 ************
2025-05-25 03:56:54.214581 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})
2025-05-25 03:56:54.214607 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})
2025-05-25 03:56:54.214617 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2',
'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2025-05-25 03:56:54.214626 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})
2025-05-25 03:56:54.214635 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3',
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})
2025-05-25 03:56:54.214647 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2025-05-25 03:56:54.214712 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})
2025-05-25 03:56:54.214731 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image':
'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})
2025-05-25 03:56:54.214740 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2025-05-25 03:56:54.214748 | orchestrator |
2025-05-25 03:56:54.214756 | orchestrator | TASK [haproxy-config : Add configuration for keystone when using single external frontend] ***
2025-05-25 03:56:54.214765 | orchestrator | Sunday 25 May 2025 03:53:15 +0000 (0:00:04.049) 0:02:36.672 ************
2025-05-25 03:56:54.214773 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl
http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})
2025-05-25 03:56:54.214786 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})
2025-05-25 03:56:54.214845 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2025-05-25 03:56:54.214863 | orchestrator | skipping: [testbed-node-0]
2025-05-25 03:56:54.214872 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True,
'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})
2025-05-25 03:56:54.214881 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})
2025-05-25 03:56:54.214890 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'],
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2025-05-25 03:56:54.214898 | orchestrator | skipping: [testbed-node-1]
2025-05-25 03:56:54.214911 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})
2025-05-25 03:56:54.214968 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})
2025-05-25 03:56:54.214986 | orchestrator | skipping: [testbed-node-2]
=> (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2025-05-25 03:56:54.214994 | orchestrator | skipping: [testbed-node-2]
2025-05-25 03:56:54.215023 | orchestrator |
2025-05-25 03:56:54.215033 | orchestrator | TASK [haproxy-config : Configuring firewall for keystone] **********************
2025-05-25 03:56:54.215060 | orchestrator | Sunday 25 May 2025 03:53:16 +0000 (0:00:00.762) 0:02:37.435 ************
2025-05-25 03:56:54.215069 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone_internal', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}})
2025-05-25 03:56:54.215078 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}})
2025-05-25 03:56:54.215086 | orchestrator | skipping: [testbed-node-0]
2025-05-25 03:56:54.215095 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone_internal', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}})
2025-05-25 03:56:54.215103 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone_external',
'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}})  2025-05-25 03:56:54.215111 | orchestrator | skipping: [testbed-node-1] 2025-05-25 03:56:54.215120 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone_internal', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}})  2025-05-25 03:56:54.215128 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}})  2025-05-25 03:56:54.215136 | orchestrator | skipping: [testbed-node-2] 2025-05-25 03:56:54.215153 | orchestrator | 2025-05-25 03:56:54.215161 | orchestrator | TASK [proxysql-config : Copying over keystone ProxySQL users config] *********** 2025-05-25 03:56:54.215168 | orchestrator | Sunday 25 May 2025 03:53:17 +0000 (0:00:01.121) 0:02:38.557 ************ 2025-05-25 03:56:54.215176 | orchestrator | changed: [testbed-node-0] 2025-05-25 03:56:54.215184 | orchestrator | changed: [testbed-node-1] 2025-05-25 03:56:54.215192 | orchestrator | changed: [testbed-node-2] 2025-05-25 03:56:54.215199 | orchestrator | 2025-05-25 03:56:54.215207 | orchestrator | TASK [proxysql-config : Copying over keystone ProxySQL rules config] *********** 2025-05-25 03:56:54.215215 | orchestrator | Sunday 25 May 2025 03:53:18 +0000 (0:00:01.474) 0:02:40.031 ************ 2025-05-25 03:56:54.215229 | orchestrator | changed: [testbed-node-0] 2025-05-25 03:56:54.215237 | orchestrator | changed: [testbed-node-1] 2025-05-25 03:56:54.215244 | orchestrator | changed: [testbed-node-2] 2025-05-25 03:56:54.215252 | orchestrator | 
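The keystone tasks above loop over a per-service dict whose entries (visible in the `item=` output) carry an optional `haproxy` sub-dict describing the internal and external frontends. As an illustrative sketch only: the helper below mimics how such a dict could be filtered down to the frontends that actually need configuring. The dict shape is copied from the log items; the function name `haproxy_frontends` is hypothetical, not a kolla-ansible API.

```python
# Sketch, assuming the service-dict shape shown in the log above.
# Trimmed copy of the keystone entries from the loop output.
keystone_services = {
    'keystone': {
        'container_name': 'keystone',
        'enabled': True,
        'haproxy': {
            'keystone_internal': {'enabled': True, 'mode': 'http',
                                  'external': False, 'port': '5000',
                                  'listen_port': '5000'},
            'keystone_external': {'enabled': True, 'mode': 'http',
                                  'external': True,
                                  'external_fqdn': 'api.testbed.osism.xyz',
                                  'port': '5000', 'listen_port': '5000'},
        },
    },
    # Services without a 'haproxy' key get no frontend at all.
    'keystone-ssh': {'container_name': 'keystone_ssh', 'enabled': True},
}

def haproxy_frontends(services):
    """Yield (frontend_name, listen_port, is_external) for every enabled
    service that carries an enabled 'haproxy' sub-dict, mirroring the loop
    items shown in the 'Configuring firewall for keystone' task."""
    for svc in services.values():
        if not svc.get('enabled'):
            continue
        for name, fe in svc.get('haproxy', {}).items():
            # The log shows both True and 'yes' used as enabled flags.
            if fe.get('enabled') in (True, 'yes'):
                yield name, fe['listen_port'], bool(fe.get('external'))

print(sorted(haproxy_frontends(keystone_services)))
```

With the keystone dict above this yields one internal and one external frontend on port 5000, matching the two loop items the firewall task iterates (and skips) per node.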
2025-05-25 03:56:54.215266 | orchestrator | TASK [include_role : letsencrypt] ********************************************** 2025-05-25 03:56:54.215278 | orchestrator | Sunday 25 May 2025 03:53:20 +0000 (0:00:02.127) 0:02:42.159 ************ 2025-05-25 03:56:54.215290 | orchestrator | skipping: [testbed-node-0] 2025-05-25 03:56:54.215303 | orchestrator | skipping: [testbed-node-1] 2025-05-25 03:56:54.215315 | orchestrator | skipping: [testbed-node-2] 2025-05-25 03:56:54.215328 | orchestrator | 2025-05-25 03:56:54.215341 | orchestrator | TASK [include_role : magnum] *************************************************** 2025-05-25 03:56:54.215359 | orchestrator | Sunday 25 May 2025 03:53:21 +0000 (0:00:00.349) 0:02:42.509 ************ 2025-05-25 03:56:54.215372 | orchestrator | included: magnum for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-25 03:56:54.215386 | orchestrator | 2025-05-25 03:56:54.215399 | orchestrator | TASK [haproxy-config : Copying over magnum haproxy config] ********************* 2025-05-25 03:56:54.215410 | orchestrator | Sunday 25 May 2025 03:53:22 +0000 (0:00:01.337) 0:02:43.846 ************ 2025-05-25 03:56:54.215483 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 
'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-05-25 03:56:54.215496 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-05-25 03:56:54.215506 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-05-25 03:56:54.215514 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 
'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-05-25 03:56:54.215587 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-05-25 03:56:54.215599 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-05-25 03:56:54.215607 | orchestrator | 2025-05-25 03:56:54.215615 | orchestrator | TASK [haproxy-config : Add configuration for magnum when using single external frontend] *** 2025-05-25 03:56:54.215623 | orchestrator | Sunday 25 May 2025 03:53:26 +0000 (0:00:04.301) 0:02:48.147 ************ 2025-05-25 03:56:54.215632 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-05-25 03:56:54.215640 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-05-25 03:56:54.215654 | orchestrator | skipping: [testbed-node-0] 2025-05-25 03:56:54.215662 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-05-25 03:56:54.215723 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})
2025-05-25 03:56:54.215735 | orchestrator | skipping: [testbed-node-1]
2025-05-25 03:56:54.215743 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})
2025-05-25 03:56:54.215752 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})
2025-05-25 03:56:54.215760 | orchestrator | skipping: [testbed-node-2]
2025-05-25 03:56:54.215768 | orchestrator |
2025-05-25 03:56:54.215776 | orchestrator | TASK [haproxy-config : Configuring firewall for magnum] ************************
2025-05-25 03:56:54.215784 | orchestrator | Sunday 25 May 2025 03:53:27 +0000 (0:00:00.654) 0:02:48.802 ************
2025-05-25 03:56:54.215793 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}})
2025-05-25 03:56:54.215810 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}})
2025-05-25 03:56:54.215819 | orchestrator | skipping: [testbed-node-0]
2025-05-25 03:56:54.215827 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}})
2025-05-25 03:56:54.215840 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}})
2025-05-25 03:56:54.215854 | orchestrator | skipping: [testbed-node-1]
2025-05-25 03:56:54.215867 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}})
2025-05-25 03:56:54.215881 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}})
2025-05-25 03:56:54.215895 | orchestrator | skipping: [testbed-node-2]
2025-05-25 03:56:54.215931 | orchestrator |
2025-05-25 03:56:54.215944 | orchestrator | TASK [proxysql-config : Copying over magnum ProxySQL users config]
************* 2025-05-25 03:56:54.215956 | orchestrator | Sunday 25 May 2025 03:53:28 +0000 (0:00:01.309) 0:02:50.111 ************ 2025-05-25 03:56:54.215969 | orchestrator | changed: [testbed-node-0] 2025-05-25 03:56:54.215981 | orchestrator | changed: [testbed-node-1] 2025-05-25 03:56:54.215993 | orchestrator | changed: [testbed-node-2] 2025-05-25 03:56:54.216026 | orchestrator | 2025-05-25 03:56:54.216039 | orchestrator | TASK [proxysql-config : Copying over magnum ProxySQL rules config] ************* 2025-05-25 03:56:54.216058 | orchestrator | Sunday 25 May 2025 03:53:29 +0000 (0:00:01.230) 0:02:51.342 ************ 2025-05-25 03:56:54.216070 | orchestrator | changed: [testbed-node-0] 2025-05-25 03:56:54.216082 | orchestrator | changed: [testbed-node-1] 2025-05-25 03:56:54.216095 | orchestrator | changed: [testbed-node-2] 2025-05-25 03:56:54.216123 | orchestrator | 2025-05-25 03:56:54.216135 | orchestrator | TASK [include_role : manila] *************************************************** 2025-05-25 03:56:54.216146 | orchestrator | Sunday 25 May 2025 03:53:31 +0000 (0:00:01.877) 0:02:53.219 ************ 2025-05-25 03:56:54.216325 | orchestrator | included: manila for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-25 03:56:54.216350 | orchestrator | 2025-05-25 03:56:54.216359 | orchestrator | TASK [haproxy-config : Copying over manila haproxy config] ********************* 2025-05-25 03:56:54.216367 | orchestrator | Sunday 25 May 2025 03:53:32 +0000 (0:00:01.026) 0:02:54.245 ************ 2025-05-25 03:56:54.216376 | orchestrator | changed: [testbed-node-0] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/manila-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}}) 2025-05-25 03:56:54.216386 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/manila-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2025-05-25 03:56:54.216405 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/manila-share:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2025-05-25 03:56:54.216414 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/manila-data:2024.2', 'enabled': True, 'privileged': True, 'volumes': 
['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2025-05-25 03:56:54.216422 | orchestrator | changed: [testbed-node-1] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/manila-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}}) 2025-05-25 03:56:54.216498 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/manila-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2025-05-25 03:56:54.216510 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-share', 'value': 
{'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/manila-share:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2025-05-25 03:56:54.216519 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/manila-data:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2025-05-25 03:56:54.216533 | orchestrator | changed: [testbed-node-2] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/manila-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 
'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}}) 2025-05-25 03:56:54.216542 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/manila-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2025-05-25 03:56:54.216550 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/manila-share:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2025-05-25 03:56:54.216615 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/manila-data:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 
5672'], 'timeout': '30'}}})  2025-05-25 03:56:54.216627 | orchestrator | 2025-05-25 03:56:54.216635 | orchestrator | TASK [haproxy-config : Add configuration for manila when using single external frontend] *** 2025-05-25 03:56:54.216643 | orchestrator | Sunday 25 May 2025 03:53:36 +0000 (0:00:03.421) 0:02:57.667 ************ 2025-05-25 03:56:54.216652 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/manila-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}})  2025-05-25 03:56:54.216666 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/manila-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2025-05-25 03:56:54.216680 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 
'manila-share', 'image': 'registry.osism.tech/kolla/manila-share:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2025-05-25 03:56:54.216694 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/manila-data:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2025-05-25 03:56:54.216708 | orchestrator | skipping: [testbed-node-0] 2025-05-25 03:56:54.216728 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/manila-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 
'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}})  2025-05-25 03:56:54.216817 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/manila-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2025-05-25 03:56:54.216837 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/manila-share:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2025-05-25 03:56:54.216860 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/manila-data:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2025-05-25 03:56:54.216874 | orchestrator | skipping: [testbed-node-1] 2025-05-25 03:56:54.216915 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/manila-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}})  2025-05-25 03:56:54.216930 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/manila-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2025-05-25 03:56:54.216944 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/manila-share:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2025-05-25 03:56:54.217101 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/manila-data:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2025-05-25 03:56:54.217122 | orchestrator | skipping: [testbed-node-2] 2025-05-25 03:56:54.217140 | orchestrator | 2025-05-25 03:56:54.217179 | orchestrator | TASK [haproxy-config : Configuring firewall for manila] ************************ 2025-05-25 03:56:54.217191 | orchestrator | Sunday 25 May 2025 03:53:37 +0000 (0:00:00.783) 0:02:58.451 ************ 2025-05-25 03:56:54.217213 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}})  2025-05-25 03:56:54.217224 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}})  2025-05-25 03:56:54.217236 | orchestrator | skipping: [testbed-node-0] 2025-05-25 03:56:54.217247 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila_api', 'value': {'enabled': 'yes', 
'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}})  2025-05-25 03:56:54.217259 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}})  2025-05-25 03:56:54.217271 | orchestrator | skipping: [testbed-node-1] 2025-05-25 03:56:54.217282 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}})  2025-05-25 03:56:54.217294 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}})  2025-05-25 03:56:54.217306 | orchestrator | skipping: [testbed-node-2] 2025-05-25 03:56:54.217317 | orchestrator | 2025-05-25 03:56:54.217328 | orchestrator | TASK [proxysql-config : Copying over manila ProxySQL users config] ************* 2025-05-25 03:56:54.217340 | orchestrator | Sunday 25 May 2025 03:53:38 +0000 (0:00:00.986) 0:02:59.437 ************ 2025-05-25 03:56:54.217352 | orchestrator | changed: [testbed-node-1] 2025-05-25 03:56:54.217364 | orchestrator | changed: [testbed-node-0] 2025-05-25 03:56:54.217375 | orchestrator | changed: [testbed-node-2] 2025-05-25 03:56:54.217387 | orchestrator | 2025-05-25 03:56:54.217398 | orchestrator | TASK [proxysql-config : Copying over manila ProxySQL rules config] ************* 2025-05-25 03:56:54.217410 | orchestrator | Sunday 25 May 2025 03:53:39 +0000 (0:00:01.533) 0:03:00.971 ************ 2025-05-25 03:56:54.217422 | orchestrator | changed: [testbed-node-1] 2025-05-25 03:56:54.217434 | orchestrator | changed: [testbed-node-0] 2025-05-25 03:56:54.217446 | orchestrator | changed: [testbed-node-2] 2025-05-25 03:56:54.217458 | orchestrator | 2025-05-25 
03:56:54.217470 | orchestrator | TASK [include_role : mariadb] ************************************************** 2025-05-25 03:56:54.217481 | orchestrator | Sunday 25 May 2025 03:53:41 +0000 (0:00:02.189) 0:03:03.161 ************ 2025-05-25 03:56:54.217492 | orchestrator | included: mariadb for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-25 03:56:54.217504 | orchestrator | 2025-05-25 03:56:54.217516 | orchestrator | TASK [mariadb : Ensure mysql monitor user exist] ******************************* 2025-05-25 03:56:54.217527 | orchestrator | Sunday 25 May 2025 03:53:42 +0000 (0:00:01.081) 0:03:04.242 ************ 2025-05-25 03:56:54.217539 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2025-05-25 03:56:54.217552 | orchestrator | 2025-05-25 03:56:54.217563 | orchestrator | TASK [haproxy-config : Copying over mariadb haproxy config] ******************** 2025-05-25 03:56:54.217575 | orchestrator | Sunday 25 May 2025 03:53:45 +0000 (0:00:03.091) 0:03:07.334 ************ 2025-05-25 03:56:54.217677 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout 
server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2025-05-25 03:56:54.217703 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/mariadb-clustercheck:2024.2', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}}})  2025-05-25 03:56:54.217710 | orchestrator | skipping: [testbed-node-0] 2025-05-25 03:56:54.217718 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 
'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2025-05-25 03:56:54.217725 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/mariadb-clustercheck:2024.2', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 
'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}}})  2025-05-25 03:56:54.217737 | orchestrator | skipping: [testbed-node-2] 2025-05-25 03:56:54.217798 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 
5 backup', '']}}}})  2025-05-25 03:56:54.217810 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/mariadb-clustercheck:2024.2', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}}})  2025-05-25 03:56:54.217817 | orchestrator | skipping: [testbed-node-1] 2025-05-25 03:56:54.217824 | orchestrator | 2025-05-25 03:56:54.217830 | orchestrator | TASK [haproxy-config : Add configuration for mariadb when using single external frontend] *** 2025-05-25 03:56:54.217837 | orchestrator | Sunday 25 May 2025 03:53:48 +0000 (0:00:02.744) 0:03:10.079 ************ 2025-05-25 03:56:54.217852 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': 
['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2025-05-25 03:56:54.217910 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/mariadb-clustercheck:2024.2', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}}})  2025-05-25 03:56:54.217925 | orchestrator | skipping: [testbed-node-0] 2025-05-25 03:56:54.217945 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', 
'/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2025-05-25 03:56:54.217958 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/mariadb-clustercheck:2024.2', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 
'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}}})  2025-05-25 03:56:54.217978 | orchestrator | skipping: [testbed-node-1] 2025-05-25 03:56:54.218106 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 
inter 2000 rise 2 fall 5 backup', '']}}}})  2025-05-25 03:56:54.218121 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/mariadb-clustercheck:2024.2', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}}})  2025-05-25 03:56:54.218129 | orchestrator | skipping: [testbed-node-2] 2025-05-25 03:56:54.218135 | orchestrator | 2025-05-25 03:56:54.218142 | orchestrator | TASK [haproxy-config : Configuring firewall for mariadb] *********************** 2025-05-25 03:56:54.218149 | orchestrator | Sunday 25 May 2025 03:53:51 +0000 (0:00:02.481) 0:03:12.561 ************ 2025-05-25 03:56:54.218156 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})  2025-05-25 03:56:54.218163 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb_external_lb', 'value': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 
'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})  2025-05-25 03:56:54.218178 | orchestrator | skipping: [testbed-node-0] 2025-05-25 03:56:54.218185 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})  2025-05-25 03:56:54.218196 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb_external_lb', 'value': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})  2025-05-25 03:56:54.218203 | orchestrator | skipping: [testbed-node-1] 2025-05-25 03:56:54.218254 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout 
server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})  2025-05-25 03:56:54.218265 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb_external_lb', 'value': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})  2025-05-25 03:56:54.218272 | orchestrator | skipping: [testbed-node-2] 2025-05-25 03:56:54.218279 | orchestrator | 2025-05-25 03:56:54.218285 | orchestrator | TASK [proxysql-config : Copying over mariadb ProxySQL users config] ************ 2025-05-25 03:56:54.218292 | orchestrator | Sunday 25 May 2025 03:53:54 +0000 (0:00:03.016) 0:03:15.577 ************ 2025-05-25 03:56:54.218299 | orchestrator | changed: [testbed-node-0] 2025-05-25 03:56:54.218305 | orchestrator | changed: [testbed-node-1] 2025-05-25 03:56:54.218312 | orchestrator | changed: [testbed-node-2] 2025-05-25 03:56:54.218319 | orchestrator | 2025-05-25 03:56:54.218325 | orchestrator | TASK [proxysql-config : Copying over mariadb ProxySQL rules config] ************ 2025-05-25 03:56:54.218332 | orchestrator | Sunday 25 May 2025 03:53:56 +0000 (0:00:02.123) 0:03:17.701 ************ 2025-05-25 03:56:54.218338 | orchestrator | skipping: [testbed-node-0] 2025-05-25 03:56:54.218345 | orchestrator | skipping: [testbed-node-1] 2025-05-25 03:56:54.218352 | orchestrator | 
skipping: [testbed-node-2] 2025-05-25 03:56:54.218358 | orchestrator | 2025-05-25 03:56:54.218365 | orchestrator | TASK [include_role : masakari] ************************************************* 2025-05-25 03:56:54.218372 | orchestrator | Sunday 25 May 2025 03:53:57 +0000 (0:00:01.464) 0:03:19.165 ************ 2025-05-25 03:56:54.218378 | orchestrator | skipping: [testbed-node-0] 2025-05-25 03:56:54.218385 | orchestrator | skipping: [testbed-node-1] 2025-05-25 03:56:54.218391 | orchestrator | skipping: [testbed-node-2] 2025-05-25 03:56:54.218398 | orchestrator | 2025-05-25 03:56:54.218405 | orchestrator | TASK [include_role : memcached] ************************************************ 2025-05-25 03:56:54.218416 | orchestrator | Sunday 25 May 2025 03:53:58 +0000 (0:00:00.317) 0:03:19.483 ************ 2025-05-25 03:56:54.218423 | orchestrator | included: memcached for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-25 03:56:54.218430 | orchestrator | 2025-05-25 03:56:54.218436 | orchestrator | TASK [haproxy-config : Copying over memcached haproxy config] ****************** 2025-05-25 03:56:54.218443 | orchestrator | Sunday 25 May 2025 03:53:59 +0000 (0:00:01.094) 0:03:20.578 ************ 2025-05-25 03:56:54.218450 | orchestrator | changed: [testbed-node-0] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/memcached:2024.2', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}}) 
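The haproxy-config tasks in this log iterate over each service's dict and skip any item whose `haproxy` sub-config is disabled, which is why the memcached items above report "skipping" for the firewall/frontend tasks (`'enabled': False` in the `haproxy.memcached` entry) even though the copy task itself is "changed". A minimal sketch of that filtering, under the assumption that the role reduces to "emit a listen block per enabled haproxy entry" (the function name `render_listen_blocks` and the output format are illustrative, not kolla-ansible's actual template):

```python
def render_listen_blocks(services, vip):
    """Emit HAProxy 'listen' blocks for every enabled haproxy entry.

    `services` mirrors the dicts seen in the log: each value may carry
    a 'haproxy' mapping of listener-name -> settings. Entries whose
    'enabled' is falsy (like memcached above) are skipped, matching
    the 'skipping:' lines in the task output.
    """
    blocks = []
    for svc in services.values():
        if not svc.get('enabled'):
            continue  # service disabled entirely
        for name, cfg in svc.get('haproxy', {}).items():
            # the log shows both booleans and 'yes' strings for 'enabled'
            if cfg.get('enabled') in (True, 'yes'):
                port = cfg.get('listen_port', cfg.get('port'))
                blocks.append(
                    f"listen {name}\n  mode {cfg['mode']}\n"
                    f"  bind {vip}:{port}"
                )
    return blocks
```

With the memcached and manila-api values from the log, only the manila listener survives the filter; the memcached entry is dropped exactly as the skipped items indicate.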
2025-05-25 03:56:54.218458 | orchestrator | changed: [testbed-node-1] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/memcached:2024.2', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}}) 2025-05-25 03:56:54.218509 | orchestrator | changed: [testbed-node-2] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/memcached:2024.2', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}}) 2025-05-25 03:56:54.218518 | orchestrator | 2025-05-25 03:56:54.218525 | orchestrator | TASK [haproxy-config : Add configuration for memcached when using single external frontend] *** 2025-05-25 03:56:54.218532 | orchestrator | Sunday 25 May 2025 03:54:00 +0000 (0:00:01.715) 0:03:22.294 ************ 2025-05-25 03:56:54.218539 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'memcached', 'value': {'container_name': 
'memcached', 'image': 'registry.osism.tech/kolla/memcached:2024.2', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}})  2025-05-25 03:56:54.218546 | orchestrator | skipping: [testbed-node-0] 2025-05-25 03:56:54.218553 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/memcached:2024.2', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}})  2025-05-25 03:56:54.218586 | orchestrator | skipping: [testbed-node-1] 2025-05-25 03:56:54.218636 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/memcached:2024.2', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': 
'30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}})  2025-05-25 03:56:54.218651 | orchestrator | skipping: [testbed-node-2] 2025-05-25 03:56:54.218658 | orchestrator | 2025-05-25 03:56:54.218666 | orchestrator | TASK [haproxy-config : Configuring firewall for memcached] ********************* 2025-05-25 03:56:54.218673 | orchestrator | Sunday 25 May 2025 03:54:01 +0000 (0:00:00.385) 0:03:22.679 ************ 2025-05-25 03:56:54.218680 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'memcached', 'value': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}})  2025-05-25 03:56:54.218692 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'memcached', 'value': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}})  2025-05-25 03:56:54.218699 | orchestrator | skipping: [testbed-node-0] 2025-05-25 03:56:54.218706 | orchestrator | skipping: [testbed-node-1] 2025-05-25 03:56:54.218767 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'memcached', 'value': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}})  2025-05-25 03:56:54.218778 | orchestrator | skipping: [testbed-node-2] 2025-05-25 03:56:54.218784 | orchestrator | 2025-05-25 03:56:54.218791 | orchestrator | TASK 
[proxysql-config : Copying over memcached ProxySQL users config] ********** 2025-05-25 03:56:54.218798 | orchestrator | Sunday 25 May 2025 03:54:01 +0000 (0:00:00.585) 0:03:23.264 ************ 2025-05-25 03:56:54.218804 | orchestrator | skipping: [testbed-node-0] 2025-05-25 03:56:54.218825 | orchestrator | skipping: [testbed-node-1] 2025-05-25 03:56:54.218832 | orchestrator | skipping: [testbed-node-2] 2025-05-25 03:56:54.218838 | orchestrator | 2025-05-25 03:56:54.218845 | orchestrator | TASK [proxysql-config : Copying over memcached ProxySQL rules config] ********** 2025-05-25 03:56:54.218851 | orchestrator | Sunday 25 May 2025 03:54:02 +0000 (0:00:00.733) 0:03:23.998 ************ 2025-05-25 03:56:54.218858 | orchestrator | skipping: [testbed-node-0] 2025-05-25 03:56:54.218865 | orchestrator | skipping: [testbed-node-1] 2025-05-25 03:56:54.218871 | orchestrator | skipping: [testbed-node-2] 2025-05-25 03:56:54.218878 | orchestrator | 2025-05-25 03:56:54.218884 | orchestrator | TASK [include_role : mistral] ************************************************** 2025-05-25 03:56:54.218891 | orchestrator | Sunday 25 May 2025 03:54:03 +0000 (0:00:01.212) 0:03:25.210 ************ 2025-05-25 03:56:54.218906 | orchestrator | skipping: [testbed-node-0] 2025-05-25 03:56:54.218917 | orchestrator | skipping: [testbed-node-1] 2025-05-25 03:56:54.218927 | orchestrator | skipping: [testbed-node-2] 2025-05-25 03:56:54.218938 | orchestrator | 2025-05-25 03:56:54.218949 | orchestrator | TASK [include_role : neutron] ************************************************** 2025-05-25 03:56:54.218959 | orchestrator | Sunday 25 May 2025 03:54:04 +0000 (0:00:00.310) 0:03:25.520 ************ 2025-05-25 03:56:54.218971 | orchestrator | included: neutron for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-25 03:56:54.218982 | orchestrator | 2025-05-25 03:56:54.218993 | orchestrator | TASK [haproxy-config : Copying over neutron haproxy config] ******************** 2025-05-25 03:56:54.219021 
| orchestrator | Sunday 25 May 2025 03:54:05 +0000 (0:00:01.403) 0:03:26.924 ************ 2025-05-25 03:56:54.219035 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-05-25 03:56:54.219046 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/neutron-openvswitch-agent:2024.2', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-05-25 03:56:54.219058 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 
'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/neutron-linuxbridge-agent:2024.2', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-05-25 03:56:54.219129 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/neutron-dhcp-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-05-25 03:56:54.219147 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/neutron-l3-agent:2024.2', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-05-25 03:56:54.219154 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/neutron-sriov-agent:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-05-25 03:56:54.219161 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/neutron-mlnx-agent:2024.2', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-05-25 03:56:54.219170 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/neutron-eswitchd:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-05-25 03:56:54.219177 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': 
{'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-05-25 03:56:54.219230 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-05-25 03:56:54.219245 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-05-25 03:56:54.219258 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/neutron-openvswitch-agent:2024.2', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-05-25 03:56:54.219270 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/neutron-bgp-dragent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-05-25 03:56:54.219282 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 
'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/neutron-linuxbridge-agent:2024.2', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-05-25 03:56:54.219294 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/neutron-infoblox-ipam-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-25 03:56:54.219372 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/neutron-metering-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-05-25 03:56:54.219390 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/neutron-dhcp-agent:2024.2', 'privileged': True, 'enabled': False, 
'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-05-25 03:56:54.219397 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/ironic-neutron-agent:2024.2', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-05-25 03:56:54.219404 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/neutron-l3-agent:2024.2', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}}) 
 2025-05-25 03:56:54.219412 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/neutron-tls-proxy:2024.2', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-05-25 03:56:54.219423 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/neutron-sriov-agent:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-05-25 03:56:54.219494 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/neutron-ovn-agent:2024.2', 'volumes': 
['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-05-25 03:56:54.219510 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'registry.osism.tech/kolla/neutron-ovn-vpn-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-ovn-vpn-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-05-25 03:56:54.219517 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/neutron-mlnx-agent:2024.2', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-05-25 03:56:54.219525 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/neutron-eswitchd:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-05-25 03:56:54.219532 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-05-25 03:56:54.219539 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-05-25 03:56:54.219550 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/neutron-bgp-dragent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': 
['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-05-25 03:56:54.219605 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-05-25 03:56:54.219615 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/neutron-infoblox-ipam-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-25 03:56:54.219623 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 
'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/neutron-openvswitch-agent:2024.2', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-05-25 03:56:54.219630 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/neutron-metering-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-05-25 03:56:54.219637 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/ironic-neutron-agent:2024.2', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-05-25 03:56:54.219647 | orchestrator | skipping: [testbed-node-2] => (item={'key': 
'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/neutron-linuxbridge-agent:2024.2', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-05-25 03:56:54.219717 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/neutron-tls-proxy:2024.2', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-05-25 03:56:54.219728 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/neutron-dhcp-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 
'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-05-25 03:56:54.219735 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/neutron-ovn-agent:2024.2', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-05-25 03:56:54.219742 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/neutron-l3-agent:2024.2', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-05-25 03:56:54.219753 | orchestrator | skipping: [testbed-node-1] => 
(item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'registry.osism.tech/kolla/neutron-ovn-vpn-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-ovn-vpn-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-05-25 03:56:54.219807 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/neutron-sriov-agent:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-05-25 03:56:54.219817 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/neutron-mlnx-agent:2024.2', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-05-25 03:56:54.219824 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 
'registry.osism.tech/kolla/neutron-eswitchd:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-05-25 03:56:54.219831 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-05-25 03:56:54.219838 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-05-25 03:56:54.219845 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 
'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/neutron-bgp-dragent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-05-25 03:56:54.219861 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/neutron-infoblox-ipam-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-25 03:56:54.219922 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/neutron-metering-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-05-25 03:56:54.219939 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/ironic-neutron-agent:2024.2', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': 
['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-05-25 03:56:54.219951 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/neutron-tls-proxy:2024.2', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-05-25 03:56:54.219963 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/neutron-ovn-agent:2024.2', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 
'timeout': '30'}}})  2025-05-25 03:56:54.219975 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'registry.osism.tech/kolla/neutron-ovn-vpn-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-ovn-vpn-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-05-25 03:56:54.219996 | orchestrator | 2025-05-25 03:56:54.220025 | orchestrator | TASK [haproxy-config : Add configuration for neutron when using single external frontend] *** 2025-05-25 03:56:54.220033 | orchestrator | Sunday 25 May 2025 03:54:09 +0000 (0:00:04.120) 0:03:31.044 ************ 2025-05-25 03:56:54.220098 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': 
'9696'}}}})  2025-05-25 03:56:54.220109 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/neutron-openvswitch-agent:2024.2', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-05-25 03:56:54.220130 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/neutron-linuxbridge-agent:2024.2', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-05-25 03:56:54.220137 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/neutron-dhcp-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-05-25 03:56:54.220144 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/neutron-l3-agent:2024.2', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-05-25 03:56:54.220160 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/neutron-sriov-agent:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-05-25 03:56:54.220210 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 
'image': 'registry.osism.tech/kolla/neutron-mlnx-agent:2024.2', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-05-25 03:56:54.220220 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-05-25 03:56:54.220227 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/neutron-eswitchd:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-05-25 03:56:54.220234 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 
'registry.osism.tech/kolla/neutron-openvswitch-agent:2024.2', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-05-25 03:56:54.220241 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-05-25 03:56:54.220257 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/neutron-linuxbridge-agent:2024.2', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': 
'3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-05-25 03:56:54.220305 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-05-25 03:56:54.220315 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/neutron-dhcp-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-05-25 03:56:54.220322 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/neutron-bgp-dragent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': 
['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-05-25 03:56:54.220329 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/neutron-l3-agent:2024.2', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-05-25 03:56:54.220341 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/neutron-infoblox-ipam-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-25 03:56:54.220351 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/neutron-sriov-agent:2024.2', 'privileged': True, 'enabled': False, 
'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-05-25 03:56:54.220409 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/neutron-metering-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-05-25 03:56:54.220420 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/ironic-neutron-agent:2024.2', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-05-25 03:56:54.220427 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/neutron-mlnx-agent:2024.2', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-05-25 03:56:54.220435 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/neutron-tls-proxy:2024.2', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-05-25 03:56:54.220442 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/neutron-eswitchd:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-05-25 03:56:54.220454 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/neutron-ovn-agent:2024.2', 'volumes': 
['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-05-25 03:56:54.220506 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-05-25 03:56:54.220516 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  
2025-05-25 03:56:54.220523 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'registry.osism.tech/kolla/neutron-ovn-vpn-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-ovn-vpn-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-05-25 03:56:54.220531 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/neutron-openvswitch-agent:2024.2', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-05-25 03:56:54.220544 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-05-25 03:56:54.220551 | orchestrator | skipping: [testbed-node-0] 2025-05-25 03:56:54.220562 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/neutron-bgp-dragent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-05-25 03:56:54.220608 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/neutron-linuxbridge-agent:2024.2', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-05-25 03:56:54.220618 | orchestrator | skipping: [testbed-node-1] => (item={'key': 
'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/neutron-infoblox-ipam-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-25 03:56:54.220625 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/neutron-dhcp-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-05-25 03:56:54.220632 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/neutron-metering-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-05-25 03:56:54.220653 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 
'registry.osism.tech/kolla/neutron-l3-agent:2024.2', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-05-25 03:56:54.220661 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/ironic-neutron-agent:2024.2', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-05-25 03:56:54.220714 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/neutron-sriov-agent:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  
2025-05-25 03:56:54.220725 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/neutron-tls-proxy:2024.2', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-05-25 03:56:54.220737 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/neutron-mlnx-agent:2024.2', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-05-25 03:56:54.220755 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/neutron-ovn-agent:2024.2', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-05-25 03:56:54.220777 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/neutron-eswitchd:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-05-25 03:56:54.220804 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'registry.osism.tech/kolla/neutron-ovn-vpn-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-ovn-vpn-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-05-25 03:56:54.220853 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-05-25 03:56:54.220863 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-05-25 03:56:54.220870 | orchestrator | skipping: [testbed-node-1] 2025-05-25 03:56:54.220877 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/neutron-bgp-dragent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-05-25 03:56:54.220884 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/neutron-infoblox-ipam-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 
'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-25 03:56:54.220897 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/neutron-metering-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-05-25 03:56:54.220904 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/ironic-neutron-agent:2024.2', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-05-25 03:56:54.220950 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/neutron-tls-proxy:2024.2', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-05-25 03:56:54.220967 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/neutron-ovn-agent:2024.2', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-05-25 03:56:54.220980 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'registry.osism.tech/kolla/neutron-ovn-vpn-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-ovn-vpn-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-05-25 03:56:54.220991 | orchestrator | skipping: 
[testbed-node-2] 2025-05-25 03:56:54.221030 | orchestrator | 2025-05-25 03:56:54.221051 | orchestrator | TASK [haproxy-config : Configuring firewall for neutron] *********************** 2025-05-25 03:56:54.221058 | orchestrator | Sunday 25 May 2025 03:54:11 +0000 (0:00:01.557) 0:03:32.601 ************ 2025-05-25 03:56:54.221065 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}})  2025-05-25 03:56:54.221072 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}})  2025-05-25 03:56:54.221079 | orchestrator | skipping: [testbed-node-0] 2025-05-25 03:56:54.221086 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}})  2025-05-25 03:56:54.221093 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}})  2025-05-25 03:56:54.221100 | orchestrator | skipping: [testbed-node-1] 2025-05-25 03:56:54.221107 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}})  2025-05-25 03:56:54.221113 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}})  2025-05-25 03:56:54.221120 | orchestrator | skipping: [testbed-node-2] 2025-05-25 03:56:54.221127 | orchestrator | 2025-05-25 03:56:54.221134 | 
orchestrator | TASK [proxysql-config : Copying over neutron ProxySQL users config] ************ 2025-05-25 03:56:54.221140 | orchestrator | Sunday 25 May 2025 03:54:13 +0000 (0:00:01.990) 0:03:34.592 ************ 2025-05-25 03:56:54.221147 | orchestrator | changed: [testbed-node-0] 2025-05-25 03:56:54.221154 | orchestrator | changed: [testbed-node-1] 2025-05-25 03:56:54.221160 | orchestrator | changed: [testbed-node-2] 2025-05-25 03:56:54.221167 | orchestrator | 2025-05-25 03:56:54.221174 | orchestrator | TASK [proxysql-config : Copying over neutron ProxySQL rules config] ************ 2025-05-25 03:56:54.221180 | orchestrator | Sunday 25 May 2025 03:54:14 +0000 (0:00:01.274) 0:03:35.866 ************ 2025-05-25 03:56:54.221187 | orchestrator | changed: [testbed-node-0] 2025-05-25 03:56:54.221194 | orchestrator | changed: [testbed-node-1] 2025-05-25 03:56:54.221200 | orchestrator | changed: [testbed-node-2] 2025-05-25 03:56:54.221207 | orchestrator | 2025-05-25 03:56:54.221214 | orchestrator | TASK [include_role : placement] ************************************************ 2025-05-25 03:56:54.221221 | orchestrator | Sunday 25 May 2025 03:54:16 +0000 (0:00:01.983) 0:03:37.850 ************ 2025-05-25 03:56:54.221227 | orchestrator | included: placement for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-25 03:56:54.221238 | orchestrator | 2025-05-25 03:56:54.221245 | orchestrator | TASK [haproxy-config : Copying over placement haproxy config] ****************** 2025-05-25 03:56:54.221252 | orchestrator | Sunday 25 May 2025 03:54:17 +0000 (0:00:01.206) 0:03:39.057 ************ 2025-05-25 03:56:54.221283 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-05-25 03:56:54.221297 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-05-25 03:56:54.221305 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-05-25 03:56:54.221312 | orchestrator | 2025-05-25 03:56:54.221319 | orchestrator | TASK [haproxy-config : Add configuration for placement when using single external frontend] *** 2025-05-25 03:56:54.221325 | orchestrator | Sunday 25 May 2025 03:54:21 +0000 (0:00:03.583) 0:03:42.640 ************ 2025-05-25 03:56:54.221332 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2025-05-25 03:56:54.221340 | orchestrator | skipping: [testbed-node-0] 2025-05-25 03:56:54.221369 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': 
['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2025-05-25 03:56:54.221383 | orchestrator | skipping: [testbed-node-1] 2025-05-25 03:56:54.221391 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2025-05-25 03:56:54.221400 | orchestrator | skipping: [testbed-node-2] 2025-05-25 03:56:54.221407 | orchestrator | 2025-05-25 03:56:54.221414 | orchestrator | TASK [haproxy-config : Configuring firewall for placement] ********************* 2025-05-25 03:56:54.221422 | orchestrator | Sunday 25 May 2025 03:54:21 +0000 
(0:00:00.523) 0:03:43.164 ************ 2025-05-25 03:56:54.221430 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})  2025-05-25 03:56:54.221437 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})  2025-05-25 03:56:54.221445 | orchestrator | skipping: [testbed-node-0] 2025-05-25 03:56:54.221453 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})  2025-05-25 03:56:54.221461 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})  2025-05-25 03:56:54.221469 | orchestrator | skipping: [testbed-node-1] 2025-05-25 03:56:54.221476 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})  2025-05-25 03:56:54.221484 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})  2025-05-25 03:56:54.221496 | orchestrator | skipping: [testbed-node-2] 2025-05-25 03:56:54.221507 | orchestrator | 2025-05-25 03:56:54.221522 | orchestrator | TASK [proxysql-config : Copying over placement ProxySQL users config] ********** 2025-05-25 03:56:54.221537 | orchestrator | 
Sunday 25 May 2025 03:54:22 +0000 (0:00:00.783) 0:03:43.947 ************ 2025-05-25 03:56:54.221548 | orchestrator | changed: [testbed-node-0] 2025-05-25 03:56:54.221558 | orchestrator | changed: [testbed-node-1] 2025-05-25 03:56:54.221569 | orchestrator | changed: [testbed-node-2] 2025-05-25 03:56:54.221579 | orchestrator | 2025-05-25 03:56:54.221590 | orchestrator | TASK [proxysql-config : Copying over placement ProxySQL rules config] ********** 2025-05-25 03:56:54.221601 | orchestrator | Sunday 25 May 2025 03:54:24 +0000 (0:00:01.716) 0:03:45.663 ************ 2025-05-25 03:56:54.221612 | orchestrator | changed: [testbed-node-0] 2025-05-25 03:56:54.221624 | orchestrator | changed: [testbed-node-1] 2025-05-25 03:56:54.221635 | orchestrator | changed: [testbed-node-2] 2025-05-25 03:56:54.221646 | orchestrator | 2025-05-25 03:56:54.221662 | orchestrator | TASK [include_role : nova] ***************************************************** 2025-05-25 03:56:54.221669 | orchestrator | Sunday 25 May 2025 03:54:26 +0000 (0:00:02.105) 0:03:47.769 ************ 2025-05-25 03:56:54.221680 | orchestrator | included: nova for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-25 03:56:54.221686 | orchestrator | 2025-05-25 03:56:54.221693 | orchestrator | TASK [haproxy-config : Copying over nova haproxy config] *********************** 2025-05-25 03:56:54.221699 | orchestrator | Sunday 25 May 2025 03:54:27 +0000 (0:00:01.244) 0:03:49.013 ************ 2025-05-25 03:56:54.221794 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-05-25 03:56:54.221806 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-05-25 03:56:54.221813 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/nova-super-conductor:2024.2', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  
2025-05-25 03:56:54.221821 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-05-25 03:56:54.221859 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-05-25 03:56:54.221868 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-super-conductor', 'value': 
{'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/nova-super-conductor:2024.2', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-05-25 03:56:54.221876 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-05-25 03:56:54.221883 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 
'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-05-25 03:56:54.221890 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/nova-super-conductor:2024.2', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-05-25 03:56:54.221897 | orchestrator | 2025-05-25 03:56:54.221904 | orchestrator | TASK [haproxy-config : Add configuration for nova when using single external frontend] *** 2025-05-25 03:56:54.221921 | orchestrator | Sunday 25 May 2025 03:54:31 +0000 (0:00:04.330) 0:03:53.344 ************ 2025-05-25 03:56:54.221975 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 
'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2025-05-25 03:56:54.221990 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-05-25 03:56:54.222091 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/nova-super-conductor:2024.2', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-05-25 03:56:54.222114 | orchestrator | skipping: [testbed-node-0] 2025-05-25 03:56:54.222127 | orchestrator | 
skipping: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2025-05-25 03:56:54.222139 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-05-25 03:56:54.222162 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 
'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/nova-super-conductor:2024.2', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-05-25 03:56:54.222172 | orchestrator | skipping: [testbed-node-1] 2025-05-25 03:56:54.222218 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2025-05-25 03:56:54.222232 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 
'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-05-25 03:56:54.222243 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/nova-super-conductor:2024.2', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-05-25 03:56:54.222254 | orchestrator | skipping: [testbed-node-2] 2025-05-25 03:56:54.222261 | orchestrator | 2025-05-25 03:56:54.222267 | orchestrator | TASK [haproxy-config : Configuring firewall for nova] ************************** 2025-05-25 03:56:54.222274 | orchestrator | Sunday 25 May 2025 03:54:32 +0000 (0:00:00.962) 0:03:54.307 ************ 2025-05-25 03:56:54.222280 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})  2025-05-25 03:56:54.222293 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})  2025-05-25 03:56:54.222300 | orchestrator | skipping: 
[testbed-node-0] => (item={'key': 'nova_metadata', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})  2025-05-25 03:56:54.222306 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_metadata_external', 'value': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})  2025-05-25 03:56:54.222313 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})  2025-05-25 03:56:54.222319 | orchestrator | skipping: [testbed-node-0] 2025-05-25 03:56:54.222329 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})  2025-05-25 03:56:54.222355 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_metadata', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})  2025-05-25 03:56:54.222363 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_metadata_external', 'value': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})  2025-05-25 03:56:54.222369 | orchestrator | skipping: [testbed-node-1] 2025-05-25 03:56:54.222376 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})  2025-05-25 03:56:54.222382 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': 
True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})  2025-05-25 03:56:54.222389 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_metadata', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})  2025-05-25 03:56:54.222395 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_metadata_external', 'value': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})  2025-05-25 03:56:54.222401 | orchestrator | skipping: [testbed-node-2] 2025-05-25 03:56:54.222407 | orchestrator | 2025-05-25 03:56:54.222414 | orchestrator | TASK [proxysql-config : Copying over nova ProxySQL users config] *************** 2025-05-25 03:56:54.222420 | orchestrator | Sunday 25 May 2025 03:54:33 +0000 (0:00:00.828) 0:03:55.136 ************ 2025-05-25 03:56:54.222426 | orchestrator | changed: [testbed-node-0] 2025-05-25 03:56:54.222432 | orchestrator | changed: [testbed-node-1] 2025-05-25 03:56:54.222438 | orchestrator | changed: [testbed-node-2] 2025-05-25 03:56:54.222444 | orchestrator | 2025-05-25 03:56:54.222450 | orchestrator | TASK [proxysql-config : Copying over nova ProxySQL rules config] *************** 2025-05-25 03:56:54.222457 | orchestrator | Sunday 25 May 2025 03:54:35 +0000 (0:00:01.584) 0:03:56.721 ************ 2025-05-25 03:56:54.222463 | orchestrator | changed: [testbed-node-0] 2025-05-25 03:56:54.222469 | orchestrator | changed: [testbed-node-1] 2025-05-25 03:56:54.222475 | orchestrator | changed: [testbed-node-2] 2025-05-25 03:56:54.222481 | orchestrator | 2025-05-25 03:56:54.222487 | orchestrator | TASK [include_role : nova-cell] ************************************************ 2025-05-25 03:56:54.222498 | orchestrator | Sunday 25 May 2025 03:54:37 +0000 (0:00:02.070) 0:03:58.791 ************ 2025-05-25 03:56:54.222504 | 
orchestrator | included: nova-cell for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-25 03:56:54.222510 | orchestrator | 2025-05-25 03:56:54.222516 | orchestrator | TASK [nova-cell : Configure loadbalancer for nova-novncproxy] ****************** 2025-05-25 03:56:54.222522 | orchestrator | Sunday 25 May 2025 03:54:38 +0000 (0:00:01.534) 0:04:00.326 ************ 2025-05-25 03:56:54.222528 | orchestrator | included: /ansible/roles/nova-cell/tasks/cell_proxy_loadbalancer.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item=nova-novncproxy) 2025-05-25 03:56:54.222535 | orchestrator | 2025-05-25 03:56:54.222541 | orchestrator | TASK [haproxy-config : Copying over nova-cell:nova-novncproxy haproxy config] *** 2025-05-25 03:56:54.222547 | orchestrator | Sunday 25 May 2025 03:54:39 +0000 (0:00:01.063) 0:04:01.389 ************ 2025-05-25 03:56:54.222554 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}}) 2025-05-25 03:56:54.222560 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}}) 2025-05-25 03:56:54.222570 | orchestrator | changed: [testbed-node-2] 
=> (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}}) 2025-05-25 03:56:54.222577 | orchestrator | 2025-05-25 03:56:54.222599 | orchestrator | TASK [haproxy-config : Add configuration for nova-cell:nova-novncproxy when using single external frontend] *** 2025-05-25 03:56:54.222607 | orchestrator | Sunday 25 May 2025 03:54:44 +0000 (0:00:04.091) 0:04:05.481 ************ 2025-05-25 03:56:54.222613 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2025-05-25 03:56:54.222620 | orchestrator | skipping: [testbed-node-0] 2025-05-25 03:56:54.222626 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2025-05-25 03:56:54.222633 | orchestrator | skipping: 
[testbed-node-1] 2025-05-25 03:56:54.222643 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2025-05-25 03:56:54.222650 | orchestrator | skipping: [testbed-node-2] 2025-05-25 03:56:54.222656 | orchestrator | 2025-05-25 03:56:54.222662 | orchestrator | TASK [haproxy-config : Configuring firewall for nova-cell:nova-novncproxy] ***** 2025-05-25 03:56:54.222668 | orchestrator | Sunday 25 May 2025 03:54:45 +0000 (0:00:01.287) 0:04:06.769 ************ 2025-05-25 03:56:54.222674 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_novncproxy', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2025-05-25 03:56:54.222681 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_novncproxy_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2025-05-25 03:56:54.222688 | orchestrator | skipping: [testbed-node-0] 2025-05-25 03:56:54.222694 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_novncproxy', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2025-05-25 03:56:54.222701 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_novncproxy_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2025-05-25 03:56:54.222707 | orchestrator | skipping: [testbed-node-1] 2025-05-25 03:56:54.222713 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_novncproxy', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2025-05-25 03:56:54.222720 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_novncproxy_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2025-05-25 03:56:54.222726 | orchestrator | skipping: [testbed-node-2] 2025-05-25 03:56:54.222732 | orchestrator | 2025-05-25 03:56:54.222739 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL users config] ********** 2025-05-25 03:56:54.222745 | orchestrator | Sunday 25 May 2025 03:54:47 +0000 (0:00:01.872) 0:04:08.641 ************ 2025-05-25 03:56:54.222751 | orchestrator | changed: [testbed-node-0] 2025-05-25 03:56:54.222757 | orchestrator | changed: [testbed-node-1] 2025-05-25 03:56:54.222763 | orchestrator | changed: [testbed-node-2] 2025-05-25 03:56:54.222769 | orchestrator | 2025-05-25 03:56:54.222775 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL rules config] ********** 2025-05-25 03:56:54.222781 | orchestrator | Sunday 25 May 2025 03:54:49 +0000 (0:00:02.291) 0:04:10.933 ************ 2025-05-25 03:56:54.222788 | orchestrator | changed: [testbed-node-0] 2025-05-25 03:56:54.222794 | orchestrator | changed: [testbed-node-1] 2025-05-25 03:56:54.222800 | orchestrator | changed: [testbed-node-2] 2025-05-25 03:56:54.222806 | orchestrator | 2025-05-25 03:56:54.222828 | orchestrator | TASK [nova-cell : Configure loadbalancer for nova-spicehtml5proxy] ************* 2025-05-25 
03:56:54.222835 | orchestrator | Sunday 25 May 2025 03:54:52 +0000 (0:00:02.912) 0:04:13.845 ************ 2025-05-25 03:56:54.222842 | orchestrator | included: /ansible/roles/nova-cell/tasks/cell_proxy_loadbalancer.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item=nova-spicehtml5proxy) 2025-05-25 03:56:54.222848 | orchestrator | 2025-05-25 03:56:54.222859 | orchestrator | TASK [haproxy-config : Copying over nova-cell:nova-spicehtml5proxy haproxy config] *** 2025-05-25 03:56:54.222865 | orchestrator | Sunday 25 May 2025 03:54:53 +0000 (0:00:00.940) 0:04:14.786 ************ 2025-05-25 03:56:54.222871 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2025-05-25 03:56:54.222878 | orchestrator | skipping: [testbed-node-0] 2025-05-25 03:56:54.222884 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2025-05-25 03:56:54.222890 | orchestrator | skipping: [testbed-node-1] 2025-05-25 03:56:54.222897 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 
'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2025-05-25 03:56:54.222903 | orchestrator | skipping: [testbed-node-2] 2025-05-25 03:56:54.222912 | orchestrator | 2025-05-25 03:56:54.222926 | orchestrator | TASK [haproxy-config : Add configuration for nova-cell:nova-spicehtml5proxy when using single external frontend] *** 2025-05-25 03:56:54.222942 | orchestrator | Sunday 25 May 2025 03:54:54 +0000 (0:00:01.551) 0:04:16.338 ************ 2025-05-25 03:56:54.222953 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2025-05-25 03:56:54.222964 | orchestrator | skipping: [testbed-node-0] 2025-05-25 03:56:54.222997 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': 
['timeout tunnel 1h']}}}})  2025-05-25 03:56:54.223026 | orchestrator | skipping: [testbed-node-1] 2025-05-25 03:56:54.223037 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2025-05-25 03:56:54.223049 | orchestrator | skipping: [testbed-node-2] 2025-05-25 03:56:54.223055 | orchestrator | 2025-05-25 03:56:54.223086 | orchestrator | TASK [haproxy-config : Configuring firewall for nova-cell:nova-spicehtml5proxy] *** 2025-05-25 03:56:54.223093 | orchestrator | Sunday 25 May 2025 03:54:56 +0000 (0:00:01.570) 0:04:17.909 ************ 2025-05-25 03:56:54.223100 | orchestrator | skipping: [testbed-node-0] 2025-05-25 03:56:54.223106 | orchestrator | skipping: [testbed-node-1] 2025-05-25 03:56:54.223155 | orchestrator | skipping: [testbed-node-2] 2025-05-25 03:56:54.223163 | orchestrator | 2025-05-25 03:56:54.223169 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL users config] ********** 2025-05-25 03:56:54.223175 | orchestrator | Sunday 25 May 2025 03:54:57 +0000 (0:00:01.296) 0:04:19.205 ************ 2025-05-25 03:56:54.223181 | orchestrator | ok: [testbed-node-0] 2025-05-25 03:56:54.223188 | orchestrator | ok: [testbed-node-1] 2025-05-25 03:56:54.223194 | orchestrator | ok: [testbed-node-2] 2025-05-25 03:56:54.223200 | orchestrator | 2025-05-25 03:56:54.223206 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL rules config] ********** 2025-05-25 03:56:54.223212 | orchestrator | Sunday 25 May 2025 03:55:00 +0000 (0:00:02.343) 0:04:21.549 
************ 2025-05-25 03:56:54.223218 | orchestrator | ok: [testbed-node-0] 2025-05-25 03:56:54.223225 | orchestrator | ok: [testbed-node-2] 2025-05-25 03:56:54.223231 | orchestrator | ok: [testbed-node-1] 2025-05-25 03:56:54.223237 | orchestrator | 2025-05-25 03:56:54.223243 | orchestrator | TASK [nova-cell : Configure loadbalancer for nova-serialproxy] ***************** 2025-05-25 03:56:54.223249 | orchestrator | Sunday 25 May 2025 03:55:03 +0000 (0:00:03.018) 0:04:24.568 ************ 2025-05-25 03:56:54.223255 | orchestrator | included: /ansible/roles/nova-cell/tasks/cell_proxy_loadbalancer.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item=nova-serialproxy) 2025-05-25 03:56:54.223262 | orchestrator | 2025-05-25 03:56:54.223268 | orchestrator | TASK [haproxy-config : Copying over nova-cell:nova-serialproxy haproxy config] *** 2025-05-25 03:56:54.223274 | orchestrator | Sunday 25 May 2025 03:55:04 +0000 (0:00:01.221) 0:04:25.789 ************ 2025-05-25 03:56:54.223281 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2025-05-25 03:56:54.223287 | orchestrator | skipping: [testbed-node-0] 2025-05-25 03:56:54.223294 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': 
{'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2025-05-25 03:56:54.223300 | orchestrator | skipping: [testbed-node-1] 2025-05-25 03:56:54.223307 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2025-05-25 03:56:54.223313 | orchestrator | skipping: [testbed-node-2] 2025-05-25 03:56:54.223319 | orchestrator | 2025-05-25 03:56:54.223335 | orchestrator | TASK [haproxy-config : Add configuration for nova-cell:nova-serialproxy when using single external frontend] *** 2025-05-25 03:56:54.223342 | orchestrator | Sunday 25 May 2025 03:55:05 +0000 (0:00:01.063) 0:04:26.853 ************ 2025-05-25 03:56:54.223348 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2025-05-25 03:56:54.223358 | orchestrator | skipping: [testbed-node-0] 2025-05-25 03:56:54.223384 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 
'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2025-05-25 03:56:54.223392 | orchestrator | skipping: [testbed-node-1] 2025-05-25 03:56:54.223399 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2025-05-25 03:56:54.223406 | orchestrator | skipping: [testbed-node-2] 2025-05-25 03:56:54.223412 | orchestrator | 2025-05-25 03:56:54.223418 | orchestrator | TASK [haproxy-config : Configuring firewall for nova-cell:nova-serialproxy] **** 2025-05-25 03:56:54.223424 | orchestrator | Sunday 25 May 2025 03:55:06 +0000 (0:00:01.268) 0:04:28.121 ************ 2025-05-25 03:56:54.223430 | orchestrator | skipping: [testbed-node-0] 2025-05-25 03:56:54.223436 | orchestrator | skipping: [testbed-node-1] 2025-05-25 03:56:54.223442 | orchestrator | skipping: [testbed-node-2] 2025-05-25 03:56:54.223449 | orchestrator | 2025-05-25 03:56:54.223455 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL users config] ********** 2025-05-25 03:56:54.223461 | orchestrator | Sunday 25 May 2025 03:55:08 +0000 (0:00:01.800) 0:04:29.921 ************ 2025-05-25 03:56:54.223467 | orchestrator | ok: [testbed-node-0] 2025-05-25 
03:56:54.223473 | orchestrator | ok: [testbed-node-1] 2025-05-25 03:56:54.223480 | orchestrator | ok: [testbed-node-2] 2025-05-25 03:56:54.223486 | orchestrator | 2025-05-25 03:56:54.223492 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL rules config] ********** 2025-05-25 03:56:54.223498 | orchestrator | Sunday 25 May 2025 03:55:10 +0000 (0:00:02.156) 0:04:32.078 ************ 2025-05-25 03:56:54.223504 | orchestrator | ok: [testbed-node-0] 2025-05-25 03:56:54.223510 | orchestrator | ok: [testbed-node-1] 2025-05-25 03:56:54.223516 | orchestrator | ok: [testbed-node-2] 2025-05-25 03:56:54.223522 | orchestrator | 2025-05-25 03:56:54.223528 | orchestrator | TASK [include_role : octavia] ************************************************** 2025-05-25 03:56:54.223535 | orchestrator | Sunday 25 May 2025 03:55:13 +0000 (0:00:03.108) 0:04:35.186 ************ 2025-05-25 03:56:54.223541 | orchestrator | included: octavia for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-25 03:56:54.223547 | orchestrator | 2025-05-25 03:56:54.223553 | orchestrator | TASK [haproxy-config : Copying over octavia haproxy config] ******************** 2025-05-25 03:56:54.223559 | orchestrator | Sunday 25 May 2025 03:55:15 +0000 (0:00:01.402) 0:04:36.589 ************ 2025-05-25 03:56:54.223566 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': 
False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})
2025-05-25 03:56:54.223578 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})
2025-05-25 03:56:54.223588 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})
2025-05-25 03:56:54.223611 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})
2025-05-25 03:56:54.223619 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})
2025-05-25 03:56:54.223626 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})
2025-05-25 03:56:54.223637 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})
2025-05-25 03:56:54.223644 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})
2025-05-25 03:56:54.223654 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})
2025-05-25 03:56:54.223677 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})
2025-05-25 03:56:54.223684 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})
2025-05-25 03:56:54.223691 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})
2025-05-25 03:56:54.223702 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})
2025-05-25 03:56:54.223708 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})
2025-05-25 03:56:54.223715 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})
2025-05-25 03:56:54.223721 | orchestrator |
2025-05-25 03:56:54.223728 | orchestrator | TASK [haproxy-config : Add configuration for octavia when using single external frontend] ***
2025-05-25 03:56:54.223737 | orchestrator | Sunday 25 May 2025 03:55:18 +0000 (0:00:03.687) 0:04:40.276 ************
2025-05-25 03:56:54.223759 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})
2025-05-25 03:56:54.223767 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})
2025-05-25 03:56:54.223774 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})
2025-05-25 03:56:54.223784 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})
2025-05-25 03:56:54.223791 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})
2025-05-25 03:56:54.223798 | orchestrator | skipping: [testbed-node-0]
2025-05-25 03:56:54.223807 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})
2025-05-25 03:56:54.223830 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})
2025-05-25 03:56:54.223837 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})
2025-05-25 03:56:54.223844 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})
2025-05-25 03:56:54.223854 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})
2025-05-25 03:56:54.223861 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})
2025-05-25 03:56:54.223867 | orchestrator | skipping: [testbed-node-1]
2025-05-25 03:56:54.223874 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})
2025-05-25 03:56:54.223898 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})
2025-05-25 03:56:54.223906 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})
2025-05-25 03:56:54.223918 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})
2025-05-25 03:56:54.223936 | orchestrator | skipping: [testbed-node-2]
2025-05-25 03:56:54.223946 | orchestrator |
2025-05-25 03:56:54.223957 | orchestrator | TASK [haproxy-config : Configuring firewall for octavia] ***********************
2025-05-25 03:56:54.223967 | orchestrator | Sunday 25 May 2025 03:55:19 +0000 (0:00:00.680) 0:04:40.957 ************
2025-05-25 03:56:54.223978 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})
2025-05-25 03:56:54.223990 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})
2025-05-25 03:56:54.224050 | orchestrator | skipping: [testbed-node-0]
2025-05-25 03:56:54.224061 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})
2025-05-25 03:56:54.224068 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})
2025-05-25 03:56:54.224074 | orchestrator | skipping: [testbed-node-1]
2025-05-25 03:56:54.224080 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})
2025-05-25 03:56:54.224086 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})
2025-05-25 03:56:54.224093 | orchestrator | skipping: [testbed-node-2]
2025-05-25 03:56:54.224099 | orchestrator |
2025-05-25 03:56:54.224105 | orchestrator | TASK [proxysql-config : Copying over octavia ProxySQL users config] ************
2025-05-25 03:56:54.224111 | orchestrator | Sunday 25 May 2025 03:55:20 +0000 (0:00:00.865) 0:04:41.822 ************
2025-05-25 03:56:54.224117 | orchestrator | changed: [testbed-node-0]
2025-05-25 03:56:54.224123 | orchestrator | changed: [testbed-node-1]
2025-05-25 03:56:54.224129 | orchestrator | changed: [testbed-node-2]
2025-05-25 03:56:54.224135 | orchestrator |
2025-05-25 03:56:54.224141 | orchestrator | TASK [proxysql-config : Copying over octavia ProxySQL rules config] ************
2025-05-25 03:56:54.224148 | orchestrator | Sunday 25 May 2025 03:55:22 +0000 (0:00:01.730) 0:04:43.553 ************
2025-05-25 03:56:54.224154 | orchestrator | changed: [testbed-node-0]
2025-05-25 03:56:54.224160 | orchestrator | changed: [testbed-node-1]
2025-05-25 03:56:54.224166 | orchestrator | changed: [testbed-node-2]
2025-05-25 03:56:54.224172 | orchestrator |
2025-05-25 03:56:54.224178 | orchestrator | TASK [include_role : opensearch] ***********************************************
2025-05-25 03:56:54.224184 | orchestrator | Sunday 25 May 2025 03:55:24 +0000 (0:00:02.055) 0:04:45.608 ************
2025-05-25 03:56:54.224190 | orchestrator | included: opensearch for testbed-node-0, testbed-node-1, testbed-node-2
2025-05-25 03:56:54.224196 | orchestrator |
2025-05-25 03:56:54.224206 | orchestrator | TASK [haproxy-config : Copying over opensearch haproxy config] *****************
2025-05-25 03:56:54.224212 | orchestrator | Sunday 25 May 2025 03:55:25 +0000 (0:00:01.365) 0:04:46.973 ************
2025-05-25 03:56:54.224243 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})
2025-05-25 03:56:54.224257 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})
2025-05-25 03:56:54.224264 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})
2025-05-25 03:56:54.224272 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})
2025-05-25 03:56:54.224301 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})
2025-05-25 03:56:54.224314 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})
2025-05-25 03:56:54.224321 | orchestrator |
2025-05-25 03:56:54.224327 | orchestrator | TASK [haproxy-config : Add configuration for opensearch when using single external frontend] ***
2025-05-25 03:56:54.224334 | orchestrator | Sunday 25 May 2025 03:55:30 +0000 (0:00:05.193) 0:04:52.167 ************
2025-05-25 03:56:54.224340 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})
2025-05-25 03:56:54.224347 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})
2025-05-25 03:56:54.224354 | orchestrator | skipping: [testbed-node-0]
2025-05-25 03:56:54.224379 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})
2025-05-25 03:56:54.224392 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})
2025-05-25 03:56:54.224399 | orchestrator | skipping: [testbed-node-1]
2025-05-25 03:56:54.224405 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})
2025-05-25 03:56:54.224412 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})
2025-05-25 03:56:54.224419 | orchestrator | skipping: [testbed-node-2]
2025-05-25 03:56:54.224425 | orchestrator |
2025-05-25 03:56:54.224431 | orchestrator | TASK [haproxy-config : Configuring firewall for opensearch] ********************
2025-05-25 03:56:54.224437 | orchestrator | Sunday 25 May 2025 03:55:31 +0000 (0:00:01.032) 0:04:53.199 ************
2025-05-25 03:56:54.224444 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}})
2025-05-25 03:56:54.224458 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})
2025-05-25 03:56:54.224484 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch_dashboards_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})
2025-05-25 03:56:54.224492 | orchestrator | skipping: [testbed-node-0]
2025-05-25 03:56:54.224498 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}})
2025-05-25 03:56:54.224504 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})
2025-05-25 03:56:54.224511 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch_dashboards_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})
2025-05-25 03:56:54.224517 | orchestrator | skipping: [testbed-node-1]
2025-05-25 03:56:54.224523 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}})
2025-05-25 03:56:54.224530 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})
2025-05-25 03:56:54.224536 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch_dashboards_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})
2025-05-25 03:56:54.224542 | orchestrator | skipping: [testbed-node-2]
2025-05-25 03:56:54.224549 | orchestrator |
2025-05-25 03:56:54.224555 | orchestrator | TASK [proxysql-config : Copying over opensearch ProxySQL users config] *********
2025-05-25 03:56:54.224561 | orchestrator | Sunday 25 May 2025 03:55:32 +0000 (0:00:00.884) 0:04:54.084 ************
2025-05-25 03:56:54.224567 | orchestrator | skipping: [testbed-node-0]
2025-05-25 03:56:54.224574 | orchestrator | skipping: [testbed-node-1]
2025-05-25 03:56:54.224579 | orchestrator | skipping: [testbed-node-2]
2025-05-25 03:56:54.224584 | orchestrator |
2025-05-25 03:56:54.224590 | orchestrator | TASK [proxysql-config : Copying over opensearch
ProxySQL rules config] ********* 2025-05-25 03:56:54.224595 | orchestrator | Sunday 25 May 2025 03:55:33 +0000 (0:00:00.435) 0:04:54.519 ************ 2025-05-25 03:56:54.224600 | orchestrator | skipping: [testbed-node-0] 2025-05-25 03:56:54.224606 | orchestrator | skipping: [testbed-node-1] 2025-05-25 03:56:54.224611 | orchestrator | skipping: [testbed-node-2] 2025-05-25 03:56:54.224616 | orchestrator | 2025-05-25 03:56:54.224622 | orchestrator | TASK [include_role : prometheus] *********************************************** 2025-05-25 03:56:54.224627 | orchestrator | Sunday 25 May 2025 03:55:34 +0000 (0:00:01.378) 0:04:55.897 ************ 2025-05-25 03:56:54.224632 | orchestrator | included: prometheus for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-25 03:56:54.224638 | orchestrator | 2025-05-25 03:56:54.224643 | orchestrator | TASK [haproxy-config : Copying over prometheus haproxy config] ***************** 2025-05-25 03:56:54.224648 | orchestrator | Sunday 25 May 2025 03:55:36 +0000 (0:00:01.655) 0:04:57.553 ************ 2025-05-25 03:56:54.224654 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.2', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}}) 2025-05-25 03:56:54.224667 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': 
{'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-05-25 03:56:54.224687 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-25 03:56:54.224693 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-25 03:56:54.224699 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', 
'/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-05-25 03:56:54.224705 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.2', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}}) 2025-05-25 03:56:54.224711 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-05-25 03:56:54.224723 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-25 03:56:54.224741 | orchestrator 
| changed: [testbed-node-2] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.2', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}}) 2025-05-25 03:56:54.224778 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-25 03:56:54.224789 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-05-25 03:56:54.224799 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 
'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-05-25 03:56:54.224808 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-25 03:56:54.224817 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-25 03:56:54.224827 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', 
'/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-05-25 03:56:54.224844 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.2', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}}) 2025-05-25 03:56:54.224854 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/prometheus-openstack-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': 
['timeout server 45s']}}}})  2025-05-25 03:56:54.224860 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-25 03:56:54.224866 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-25 03:56:54.224871 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-05-25 03:56:54.224877 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.2', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}}) 2025-05-25 03:56:54.224890 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/prometheus-openstack-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2025-05-25 03:56:54.224900 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-25 03:56:54.224907 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-25 03:56:54.224916 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-05-25 03:56:54.224929 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.2', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 
'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}}) 2025-05-25 03:56:54.224947 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/prometheus-openstack-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2025-05-25 03:56:54.224956 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-25 03:56:54.224980 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2', 'volumes': 
['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-25 03:56:54.224991 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-05-25 03:56:54.224997 | orchestrator | 2025-05-25 03:56:54.225021 | orchestrator | TASK [haproxy-config : Add configuration for prometheus when using single external frontend] *** 2025-05-25 03:56:54.225028 | orchestrator | Sunday 25 May 2025 03:55:40 +0000 (0:00:04.091) 0:05:01.644 ************ 2025-05-25 03:56:54.225033 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.2', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2025-05-25 03:56:54.225039 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 
'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-05-25 03:56:54.225050 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-25 03:56:54.225056 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-25 03:56:54.225062 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', 
'/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-05-25 03:56:54.225075 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.2', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2025-05-25 03:56:54.225082 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/prometheus-openstack-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2025-05-25 03:56:54.225087 | 
orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-25 03:56:54.225097 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-25 03:56:54.225103 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-05-25 03:56:54.225108 | orchestrator | skipping: [testbed-node-0] 2025-05-25 03:56:54.225114 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.2', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2025-05-25 03:56:54.225126 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-05-25 03:56:54.225132 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-25 03:56:54.225137 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-25 03:56:54.225143 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-05-25 03:56:54.225153 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.2', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2025-05-25 03:56:54.225158 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.2', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2025-05-25 03:56:54.225167 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-05-25 03:56:54.225176 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-25 03:56:54.225182 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/prometheus-openstack-exporter:2024.2', 'volumes': 
['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2025-05-25 03:56:54.225187 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-25 03:56:54.225199 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-25 03:56:54.225204 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-05-25 03:56:54.225210 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-25 03:56:54.225216 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-05-25 03:56:54.225231 | orchestrator | skipping: [testbed-node-1] 2025-05-25 03:56:54.225244 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.2', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 
'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2025-05-25 03:56:54.225250 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/prometheus-openstack-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2025-05-25 03:56:54.225259 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-25 03:56:54.225265 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 
'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-25 03:56:54.225271 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-05-25 03:56:54.225276 | orchestrator | skipping: [testbed-node-2] 2025-05-25 03:56:54.225282 | orchestrator | 2025-05-25 03:56:54.225287 | orchestrator | TASK [haproxy-config : Configuring firewall for prometheus] ******************** 2025-05-25 03:56:54.225293 | orchestrator | Sunday 25 May 2025 03:55:41 +0000 (0:00:01.469) 0:05:03.114 ************ 2025-05-25 03:56:54.225299 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}})  2025-05-25 03:56:54.225305 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus_server_external', 'value': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}})  2025-05-25 03:56:54.225313 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus_alertmanager', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 
'active_passive': True}})  2025-05-25 03:56:54.225322 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus_alertmanager_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})  2025-05-25 03:56:54.225328 | orchestrator | skipping: [testbed-node-0] 2025-05-25 03:56:54.225334 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}})  2025-05-25 03:56:54.225339 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus_server_external', 'value': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}})  2025-05-25 03:56:54.225349 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}})  2025-05-25 03:56:54.225354 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus_alertmanager', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})  2025-05-25 03:56:54.225360 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus_server_external', 'value': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}})  2025-05-25 03:56:54.225366 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus_alertmanager_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 
'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})  2025-05-25 03:56:54.225371 | orchestrator | skipping: [testbed-node-1] 2025-05-25 03:56:54.225377 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus_alertmanager', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})  2025-05-25 03:56:54.225383 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus_alertmanager_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})  2025-05-25 03:56:54.225388 | orchestrator | skipping: [testbed-node-2] 2025-05-25 03:56:54.225393 | orchestrator | 2025-05-25 03:56:54.225399 | orchestrator | TASK [proxysql-config : Copying over prometheus ProxySQL users config] ********* 2025-05-25 03:56:54.225404 | orchestrator | Sunday 25 May 2025 03:55:42 +0000 (0:00:00.994) 0:05:04.109 ************ 2025-05-25 03:56:54.225410 | orchestrator | skipping: [testbed-node-0] 2025-05-25 03:56:54.225415 | orchestrator | skipping: [testbed-node-1] 2025-05-25 03:56:54.225421 | orchestrator | skipping: [testbed-node-2] 2025-05-25 03:56:54.225426 | orchestrator | 2025-05-25 03:56:54.225431 | orchestrator | TASK [proxysql-config : Copying over prometheus ProxySQL rules config] ********* 2025-05-25 03:56:54.225437 | orchestrator | Sunday 25 May 2025 03:55:43 +0000 (0:00:00.450) 0:05:04.560 ************ 2025-05-25 03:56:54.225442 | orchestrator | skipping: [testbed-node-0] 2025-05-25 03:56:54.225447 | orchestrator | skipping: [testbed-node-1] 2025-05-25 03:56:54.225455 | orchestrator | skipping: [testbed-node-2] 2025-05-25 03:56:54.225461 | orchestrator | 
2025-05-25 03:56:54.225466 | orchestrator | TASK [include_role : rabbitmq] ************************************************* 2025-05-25 03:56:54.225472 | orchestrator | Sunday 25 May 2025 03:55:44 +0000 (0:00:01.627) 0:05:06.187 ************ 2025-05-25 03:56:54.225477 | orchestrator | included: rabbitmq for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-25 03:56:54.225482 | orchestrator | 2025-05-25 03:56:54.225488 | orchestrator | TASK [haproxy-config : Copying over rabbitmq haproxy config] ******************* 2025-05-25 03:56:54.225493 | orchestrator | Sunday 25 May 2025 03:55:46 +0000 (0:00:01.712) 0:05:07.899 ************ 2025-05-25 03:56:54.225504 | orchestrator | changed: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2025-05-25 03:56:54.225515 | orchestrator | changed: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 
'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2025-05-25 03:56:54.225521 | orchestrator | changed: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2025-05-25 03:56:54.225527 | orchestrator | 2025-05-25 03:56:54.225532 | orchestrator | TASK [haproxy-config : Add configuration for rabbitmq when using single external frontend] *** 2025-05-25 03:56:54.225538 | orchestrator | Sunday 25 May 2025 03:55:48 +0000 (0:00:02.411) 0:05:10.310 ************ 2025-05-25 03:56:54.225543 | 
orchestrator | skipping: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})  2025-05-25 03:56:54.225549 | orchestrator | skipping: [testbed-node-0] 2025-05-25 03:56:54.225561 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 
'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})  2025-05-25 03:56:54.225573 | orchestrator | skipping: [testbed-node-1] 2025-05-25 03:56:54.225578 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})  2025-05-25 03:56:54.225584 | orchestrator | skipping: [testbed-node-2] 2025-05-25 03:56:54.225589 | orchestrator | 2025-05-25 03:56:54.225595 | orchestrator | TASK [haproxy-config : Configuring firewall for rabbitmq] ********************** 2025-05-25 03:56:54.225600 | orchestrator | Sunday 25 May 2025 03:55:49 +0000 (0:00:00.396) 0:05:10.707 ************ 2025-05-25 03:56:54.225606 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'rabbitmq_management', 'value': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}})  2025-05-25 03:56:54.225612 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'rabbitmq_management', 'value': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}})  2025-05-25 03:56:54.225617 | orchestrator | skipping: [testbed-node-0] 2025-05-25 03:56:54.225623 | 
orchestrator | skipping: [testbed-node-1] 2025-05-25 03:56:54.225628 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'rabbitmq_management', 'value': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}})  2025-05-25 03:56:54.225633 | orchestrator | skipping: [testbed-node-2] 2025-05-25 03:56:54.225639 | orchestrator | 2025-05-25 03:56:54.225644 | orchestrator | TASK [proxysql-config : Copying over rabbitmq ProxySQL users config] *********** 2025-05-25 03:56:54.225649 | orchestrator | Sunday 25 May 2025 03:55:50 +0000 (0:00:00.967) 0:05:11.674 ************ 2025-05-25 03:56:54.225655 | orchestrator | skipping: [testbed-node-0] 2025-05-25 03:56:54.225660 | orchestrator | skipping: [testbed-node-1] 2025-05-25 03:56:54.225666 | orchestrator | skipping: [testbed-node-2] 2025-05-25 03:56:54.225671 | orchestrator | 2025-05-25 03:56:54.225676 | orchestrator | TASK [proxysql-config : Copying over rabbitmq ProxySQL rules config] *********** 2025-05-25 03:56:54.225682 | orchestrator | Sunday 25 May 2025 03:55:50 +0000 (0:00:00.421) 0:05:12.096 ************ 2025-05-25 03:56:54.225687 | orchestrator | skipping: [testbed-node-0] 2025-05-25 03:56:54.225693 | orchestrator | skipping: [testbed-node-1] 2025-05-25 03:56:54.225698 | orchestrator | skipping: [testbed-node-2] 2025-05-25 03:56:54.225704 | orchestrator | 2025-05-25 03:56:54.225709 | orchestrator | TASK [include_role : skyline] ************************************************** 2025-05-25 03:56:54.225714 | orchestrator | Sunday 25 May 2025 03:55:51 +0000 (0:00:01.287) 0:05:13.384 ************ 2025-05-25 03:56:54.225720 | orchestrator | included: skyline for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-25 03:56:54.225729 | orchestrator | 2025-05-25 03:56:54.225734 | orchestrator | TASK [haproxy-config : Copying over skyline haproxy config] ******************** 2025-05-25 03:56:54.225739 | orchestrator | Sunday 25 May 2025 03:55:53 +0000 (0:00:01.791) 0:05:15.175 ************ 
2025-05-25 03:56:54.225748 | orchestrator | changed: [testbed-node-0] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-apiserver:2024.2', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}}) 2025-05-25 03:56:54.225757 | orchestrator | changed: [testbed-node-1] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-apiserver:2024.2', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}}) 2025-05-25 03:56:54.225763 | orchestrator | changed: [testbed-node-2] => 
(item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-apiserver:2024.2', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}}) 2025-05-25 03:56:54.225769 | orchestrator | changed: [testbed-node-0] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-console:2024.2', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}}) 2025-05-25 03:56:54.225779 | orchestrator | changed: [testbed-node-1] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 
'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-console:2024.2', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}}) 2025-05-25 03:56:54.225790 | orchestrator | changed: [testbed-node-2] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-console:2024.2', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}}) 2025-05-25 03:56:54.225796 | orchestrator | 2025-05-25 03:56:54.225802 | orchestrator | TASK [haproxy-config : Add configuration for skyline when using single external frontend] *** 2025-05-25 03:56:54.225807 | orchestrator | Sunday 25 May 2025 03:55:59 
+0000 (0:00:06.096) 0:05:21.271 ************ 2025-05-25 03:56:54.225813 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-apiserver:2024.2', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}})  2025-05-25 03:56:54.225819 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-console:2024.2', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}})  2025-05-25 03:56:54.225824 | 
orchestrator | skipping: [testbed-node-0]
2025-05-25 03:56:54.225834 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-apiserver:2024.2', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}})
2025-05-25 03:56:54.225845 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-console:2024.2', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}})
2025-05-25 03:56:54.225851 | orchestrator | skipping: [testbed-node-1]
2025-05-25 03:56:54.225857 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-apiserver:2024.2', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}})
2025-05-25 03:56:54.225863 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-console:2024.2', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}})
2025-05-25 03:56:54.225868 | orchestrator | skipping: [testbed-node-2]
2025-05-25 03:56:54.225874 | orchestrator |
2025-05-25 03:56:54.225879 | orchestrator | TASK [haproxy-config : Configuring firewall for skyline] ***********************
2025-05-25 03:56:54.225943 | orchestrator | Sunday 25 May 2025 03:56:00 +0000 (0:00:00.665) 0:05:21.937 ************
2025-05-25 03:56:54.225956 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline_apiserver', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})
2025-05-25 03:56:54.225966 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline_apiserver_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})
2025-05-25 03:56:54.225976 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline_console', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})
2025-05-25 03:56:54.225986 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline_console_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})
2025-05-25 03:56:54.225996 | orchestrator | skipping: [testbed-node-0]
2025-05-25 03:56:54.226044 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline_apiserver', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})
2025-05-25 03:56:54.226052 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline_apiserver_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})
2025-05-25 03:56:54.226058 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline_console', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})
2025-05-25 03:56:54.226067 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline_console_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})
2025-05-25 03:56:54.226073 | orchestrator | skipping: [testbed-node-1]
2025-05-25 03:56:54.226083 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline_apiserver', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})
2025-05-25 03:56:54.226089 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline_apiserver_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})
2025-05-25 03:56:54.226095 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline_console', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})
2025-05-25 03:56:54.226101 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline_console_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})
2025-05-25 03:56:54.226106 | orchestrator | skipping: [testbed-node-2]
2025-05-25 03:56:54.226112 | orchestrator |
2025-05-25 03:56:54.226117 | orchestrator | TASK [proxysql-config : Copying over skyline ProxySQL users config] ************
2025-05-25 03:56:54.226123 | orchestrator | Sunday 25 May 2025 03:56:02 +0000 (0:00:01.591) 0:05:23.529 ************
2025-05-25 03:56:54.226128 | orchestrator | changed: [testbed-node-0]
2025-05-25 03:56:54.226134 | orchestrator | changed: [testbed-node-2]
2025-05-25 03:56:54.226139 | orchestrator | changed: [testbed-node-1]
2025-05-25 03:56:54.226144 | orchestrator |
2025-05-25 03:56:54.226150 | orchestrator | TASK [proxysql-config : Copying over skyline ProxySQL rules config] ************
2025-05-25 03:56:54.226155 | orchestrator | Sunday 25 May 2025 03:56:03 +0000 (0:00:01.296) 0:05:24.826 ************
2025-05-25 03:56:54.226165 | orchestrator | changed: [testbed-node-0]
2025-05-25 03:56:54.226171 | orchestrator | changed: [testbed-node-1]
2025-05-25 03:56:54.226176 | orchestrator | changed: [testbed-node-2]
2025-05-25 03:56:54.226181 | orchestrator |
2025-05-25 03:56:54.226187 | orchestrator | TASK [include_role : swift] ****************************************************
2025-05-25 03:56:54.226192 | orchestrator | Sunday 25 May 2025 03:56:05 +0000 (0:00:02.138) 0:05:26.965 ************
2025-05-25 03:56:54.226197 | orchestrator | skipping: [testbed-node-0]
2025-05-25 03:56:54.226203 | orchestrator | skipping: [testbed-node-1]
2025-05-25 03:56:54.226208 | orchestrator | skipping: [testbed-node-2]
2025-05-25 03:56:54.226213 | orchestrator |
2025-05-25 03:56:54.226219 | orchestrator | TASK [include_role : tacker] ***************************************************
2025-05-25 03:56:54.226224 | orchestrator | Sunday 25 May 2025 03:56:05 +0000 (0:00:00.315) 0:05:27.280 ************
2025-05-25 03:56:54.226230 | orchestrator | skipping: [testbed-node-0]
2025-05-25 03:56:54.226235 | orchestrator | skipping: [testbed-node-1]
2025-05-25 03:56:54.226240 | orchestrator | skipping: [testbed-node-2]
2025-05-25 03:56:54.226246 | orchestrator |
2025-05-25 03:56:54.226251 | orchestrator | TASK [include_role : trove] ****************************************************
2025-05-25 03:56:54.226257 | orchestrator | Sunday 25 May 2025 03:56:06 +0000 (0:00:00.309) 0:05:27.589 ************
2025-05-25 03:56:54.226262 | orchestrator | skipping: [testbed-node-0]
2025-05-25 03:56:54.226267 | orchestrator | skipping: [testbed-node-1]
2025-05-25 03:56:54.226273 | orchestrator | skipping: [testbed-node-2]
2025-05-25 03:56:54.226278 | orchestrator |
2025-05-25 03:56:54.226283 | orchestrator | TASK [include_role : venus] ****************************************************
2025-05-25 03:56:54.226289 | orchestrator | Sunday 25 May 2025 03:56:06 +0000 (0:00:00.612) 0:05:28.201 ************
2025-05-25 03:56:54.226294 | orchestrator | skipping: [testbed-node-0]
2025-05-25 03:56:54.226299 | orchestrator | skipping: [testbed-node-1]
2025-05-25 03:56:54.226305 | orchestrator | skipping: [testbed-node-2]
2025-05-25 03:56:54.226310 | orchestrator |
2025-05-25 03:56:54.226315 | orchestrator | TASK [include_role : watcher] **************************************************
2025-05-25 03:56:54.226321 | orchestrator | Sunday 25 May 2025 03:56:07 +0000 (0:00:00.307) 0:05:28.508 ************
2025-05-25 03:56:54.226326 | orchestrator | skipping: [testbed-node-0]
2025-05-25 03:56:54.226331 | orchestrator | skipping: [testbed-node-1]
2025-05-25 03:56:54.226337 | orchestrator | skipping: [testbed-node-2]
2025-05-25 03:56:54.226342 | orchestrator |
2025-05-25 03:56:54.226347 | orchestrator | TASK [include_role : zun] ******************************************************
2025-05-25 03:56:54.226353 | orchestrator | Sunday 25 May 2025 03:56:07 +0000 (0:00:00.315) 0:05:28.824 ************
2025-05-25 03:56:54.226358 | orchestrator | skipping: [testbed-node-0]
2025-05-25 03:56:54.226363 | orchestrator | skipping: [testbed-node-1]
2025-05-25 03:56:54.226369 | orchestrator | skipping: [testbed-node-2]
2025-05-25 03:56:54.226374 | orchestrator |
2025-05-25 03:56:54.226379 | orchestrator | RUNNING HANDLER [loadbalancer : Check IP addresses on the API interface] *******
2025-05-25 03:56:54.226385 | orchestrator | Sunday 25 May 2025 03:56:08 +0000 (0:00:00.823) 0:05:29.648 ************
2025-05-25 03:56:54.226390 | orchestrator | ok: [testbed-node-0]
2025-05-25 03:56:54.226395 | orchestrator | ok: [testbed-node-1]
2025-05-25 03:56:54.226401 | orchestrator | ok: [testbed-node-2]
2025-05-25 03:56:54.226406 | orchestrator |
2025-05-25 03:56:54.226412 | orchestrator | RUNNING HANDLER [loadbalancer : Group HA nodes by status] **********************
2025-05-25 03:56:54.226417 | orchestrator | Sunday 25 May 2025 03:56:08 +0000 (0:00:00.320) 0:05:30.325 ************
2025-05-25 03:56:54.226422 | orchestrator | ok: [testbed-node-0]
2025-05-25 03:56:54.226428 | orchestrator | ok: [testbed-node-1]
2025-05-25 03:56:54.226433 | orchestrator | ok: [testbed-node-2]
2025-05-25 03:56:54.226438 | orchestrator |
2025-05-25 03:56:54.226444 | orchestrator | RUNNING HANDLER [loadbalancer : Stop backup keepalived container] **************
2025-05-25 03:56:54.226452 | orchestrator | Sunday 25 May 2025 03:56:09 +0000 (0:00:00.857) 0:05:30.645 ************
2025-05-25 03:56:54.226458 | orchestrator | ok: [testbed-node-0]
2025-05-25 03:56:54.226463 | orchestrator | ok: [testbed-node-1]
2025-05-25 03:56:54.226472 | orchestrator | ok: [testbed-node-2]
2025-05-25 03:56:54.226478 | orchestrator |
2025-05-25 03:56:54.226483 | orchestrator | RUNNING HANDLER [loadbalancer : Stop backup haproxy container] *****************
2025-05-25 03:56:54.226489 | orchestrator | Sunday 25 May 2025 03:56:10 +0000 (0:00:01.232) 0:05:31.502 ************
2025-05-25 03:56:54.226494 | orchestrator | ok: [testbed-node-0]
2025-05-25 03:56:54.226500 | orchestrator | ok: [testbed-node-1]
2025-05-25 03:56:54.226508 | orchestrator | ok: [testbed-node-2]
2025-05-25 03:56:54.226513 | orchestrator |
2025-05-25 03:56:54.226519 | orchestrator | RUNNING HANDLER [loadbalancer : Stop backup proxysql container] ****************
2025-05-25 03:56:54.226524 | orchestrator | Sunday 25 May 2025 03:56:11 +0000 (0:00:00.846) 0:05:32.735 ************
2025-05-25 03:56:54.226529 | orchestrator | ok: [testbed-node-0]
2025-05-25 03:56:54.226534 | orchestrator | ok: [testbed-node-1]
2025-05-25 03:56:54.226540 | orchestrator | ok: [testbed-node-2]
2025-05-25 03:56:54.226545 | orchestrator |
2025-05-25 03:56:54.226551 | orchestrator | RUNNING HANDLER [loadbalancer : Start backup haproxy container] ****************
2025-05-25 03:56:54.226556 | orchestrator | Sunday 25 May 2025 03:56:12 +0000 (0:00:00.846) 0:05:33.582 ************
2025-05-25 03:56:54.226561 | orchestrator | changed: [testbed-node-1]
2025-05-25 03:56:54.226567 | orchestrator | changed: [testbed-node-2]
2025-05-25 03:56:54.226572 | orchestrator | changed: [testbed-node-0]
2025-05-25 03:56:54.226577 | orchestrator |
2025-05-25 03:56:54.226583 | orchestrator | RUNNING HANDLER [loadbalancer : Wait for backup haproxy to start] **************
2025-05-25 03:56:54.226588 | orchestrator | Sunday 25 May 2025 03:56:20 +0000 (0:00:08.146) 0:05:41.728 ************
2025-05-25 03:56:54.226593 | orchestrator | ok: [testbed-node-0]
2025-05-25 03:56:54.226599 | orchestrator | ok: [testbed-node-1]
2025-05-25 03:56:54.226604 | orchestrator | ok: [testbed-node-2]
2025-05-25 03:56:54.226609 | orchestrator |
2025-05-25 03:56:54.226615 | orchestrator | RUNNING HANDLER [loadbalancer : Start backup proxysql container] ***************
2025-05-25 03:56:54.226620 | orchestrator | Sunday 25 May 2025 03:56:21 +0000 (0:00:00.742) 0:05:42.471 ************
2025-05-25 03:56:54.226625 | orchestrator | changed: [testbed-node-0]
2025-05-25 03:56:54.226631 | orchestrator | changed: [testbed-node-2]
2025-05-25 03:56:54.226636 | orchestrator | changed: [testbed-node-1]
2025-05-25 03:56:54.226641 | orchestrator |
2025-05-25 03:56:54.226647 | orchestrator | RUNNING HANDLER [loadbalancer : Wait for backup proxysql to start] *************
2025-05-25 03:56:54.226652 | orchestrator | Sunday 25 May 2025 03:56:35 +0000 (0:00:14.613) 0:05:57.084 ************
2025-05-25 03:56:54.226657 | orchestrator | ok: [testbed-node-0]
2025-05-25 03:56:54.226663 | orchestrator | ok: [testbed-node-1]
2025-05-25 03:56:54.226668 | orchestrator | ok: [testbed-node-2]
2025-05-25 03:56:54.226673 | orchestrator |
2025-05-25 03:56:54.226679 | orchestrator | RUNNING HANDLER [loadbalancer : Start backup keepalived container] *************
2025-05-25 03:56:54.226684 | orchestrator | Sunday 25 May 2025 03:56:36 +0000 (0:00:00.706) 0:05:57.790 ************
2025-05-25 03:56:54.226689 | orchestrator | changed: [testbed-node-0]
2025-05-25 03:56:54.226695 | orchestrator | changed: [testbed-node-1]
2025-05-25 03:56:54.226700 | orchestrator | changed: [testbed-node-2]
2025-05-25 03:56:54.226706 | orchestrator |
2025-05-25 03:56:54.226711 | orchestrator | RUNNING HANDLER [loadbalancer : Stop master haproxy container] *****************
2025-05-25 03:56:54.226716 | orchestrator | Sunday 25 May 2025 03:56:45 +0000 (0:00:09.546) 0:06:07.337 ************
2025-05-25 03:56:54.226722 | orchestrator | skipping: [testbed-node-0]
2025-05-25 03:56:54.226727 | orchestrator | skipping: [testbed-node-1]
2025-05-25 03:56:54.226732 | orchestrator | skipping: [testbed-node-2]
2025-05-25 03:56:54.226738 | orchestrator |
2025-05-25 03:56:54.226743 | orchestrator | RUNNING HANDLER [loadbalancer : Stop master proxysql container] ****************
2025-05-25 03:56:54.226748 | orchestrator | Sunday 25 May 2025 03:56:46 +0000 (0:00:00.368) 0:06:07.705 ************
2025-05-25 03:56:54.226754 | orchestrator | skipping: [testbed-node-0]
2025-05-25 03:56:54.226759 | orchestrator | skipping: [testbed-node-1]
2025-05-25 03:56:54.226764 | orchestrator | skipping: [testbed-node-2]
2025-05-25 03:56:54.226770 | orchestrator |
2025-05-25 03:56:54.226779 | orchestrator | RUNNING HANDLER [loadbalancer : Stop master keepalived container] **************
2025-05-25 03:56:54.226784 | orchestrator | Sunday 25 May 2025 03:56:46 +0000 (0:00:00.663) 0:06:08.368 ************
2025-05-25 03:56:54.226790 | orchestrator | skipping: [testbed-node-0]
2025-05-25 03:56:54.226795 | orchestrator | skipping: [testbed-node-1]
2025-05-25 03:56:54.226800 | orchestrator | skipping: [testbed-node-2]
2025-05-25 03:56:54.226806 | orchestrator |
2025-05-25 03:56:54.226811 | orchestrator | RUNNING HANDLER [loadbalancer : Start master haproxy container] ****************
2025-05-25 03:56:54.226816 | orchestrator | Sunday 25 May 2025 03:56:47 +0000 (0:00:00.340) 0:06:08.709 ************
2025-05-25 03:56:54.226822 | orchestrator | skipping: [testbed-node-0]
2025-05-25 03:56:54.226827 | orchestrator | skipping: [testbed-node-1]
2025-05-25 03:56:54.226832 | orchestrator | skipping: [testbed-node-2]
2025-05-25 03:56:54.226837 | orchestrator |
2025-05-25 03:56:54.226843 | orchestrator | RUNNING HANDLER [loadbalancer : Start master proxysql container] ***************
2025-05-25 03:56:54.226848 | orchestrator | Sunday 25 May 2025 03:56:47 +0000 (0:00:00.340) 0:06:09.050 ************
2025-05-25 03:56:54.226853 | orchestrator | skipping: [testbed-node-0]
2025-05-25 03:56:54.226859 | orchestrator | skipping: [testbed-node-1]
2025-05-25 03:56:54.226864 | orchestrator | skipping: [testbed-node-2]
2025-05-25 03:56:54.226869 | orchestrator |
2025-05-25 03:56:54.226875 | orchestrator | RUNNING HANDLER [loadbalancer : Start master keepalived container] *************
2025-05-25 03:56:54.226880 | orchestrator | Sunday 25 May 2025 03:56:47 +0000 (0:00:00.330) 0:06:09.380 ************
2025-05-25 03:56:54.226886 | orchestrator | skipping: [testbed-node-0]
2025-05-25 03:56:54.226891 | orchestrator | skipping: [testbed-node-1]
2025-05-25 03:56:54.226896 | orchestrator | skipping: [testbed-node-2]
2025-05-25 03:56:54.226902 | orchestrator |
2025-05-25 03:56:54.226908 | orchestrator | RUNNING HANDLER [loadbalancer : Wait for haproxy to listen on VIP] *************
2025-05-25 03:56:54.226917 | orchestrator | Sunday 25 May 2025 03:56:48 +0000 (0:00:00.671) 0:06:10.052 ************
2025-05-25 03:56:54.226927 | orchestrator | ok: [testbed-node-0]
2025-05-25 03:56:54.226936 | orchestrator | ok: [testbed-node-1]
2025-05-25 03:56:54.226946 | orchestrator | ok: [testbed-node-2]
2025-05-25 03:56:54.226956 | orchestrator |
2025-05-25 03:56:54.226966 | orchestrator | RUNNING HANDLER [loadbalancer : Wait for proxysql to listen on VIP] ************
2025-05-25 03:56:54.226976 | orchestrator | Sunday 25 May 2025 03:56:49 +0000 (0:00:00.956) 0:06:11.009 ************
2025-05-25 03:56:54.226984 | orchestrator | ok: [testbed-node-0]
2025-05-25 03:56:54.226993 | orchestrator | ok: [testbed-node-1]
2025-05-25 03:56:54.226999 | orchestrator | ok: [testbed-node-2]
2025-05-25 03:56:54.227024 | orchestrator |
2025-05-25 03:56:54.227030 | orchestrator | PLAY RECAP *********************************************************************
2025-05-25 03:56:54.227036 | orchestrator | testbed-node-0 : ok=123  changed=76  unreachable=0 failed=0 skipped=97  rescued=0 ignored=0
2025-05-25 03:56:54.227045 | orchestrator | testbed-node-1 : ok=122  changed=76  unreachable=0 failed=0 skipped=97  rescued=0 ignored=0
2025-05-25 03:56:54.227051 | orchestrator | testbed-node-2 : ok=122  changed=76  unreachable=0 failed=0 skipped=97  rescued=0 ignored=0
2025-05-25 03:56:54.227056 | orchestrator |
2025-05-25 03:56:54.227062 | orchestrator |
2025-05-25 03:56:54.227067 | orchestrator | TASKS RECAP ********************************************************************
2025-05-25 03:56:54.227072 | orchestrator | Sunday 25 May 2025 03:56:50 +0000 (0:00:00.813) 0:06:11.823 ************
2025-05-25 03:56:54.227078 | orchestrator | ===============================================================================
2025-05-25 03:56:54.227083 | orchestrator | loadbalancer : Start backup proxysql container ------------------------- 14.61s
2025-05-25 03:56:54.227089 | orchestrator | loadbalancer : Start backup keepalived container ------------------------ 9.55s
2025-05-25 03:56:54.227095 | orchestrator | loadbalancer : Start backup haproxy container --------------------------- 8.15s
2025-05-25 03:56:54.227100 | orchestrator | haproxy-config : Copying over skyline haproxy config -------------------- 6.10s
2025-05-25 03:56:54.227110 | orchestrator | haproxy-config : Copying over opensearch haproxy config ----------------- 5.19s
2025-05-25 03:56:54.227115 | orchestrator | haproxy-config : Copying over designate haproxy config ------------------ 4.48s
2025-05-25 03:56:54.227120 | orchestrator | haproxy-config : Copying over ceph-rgw haproxy config ------------------- 4.43s
2025-05-25 03:56:54.227126 | orchestrator | haproxy-config : Copying over horizon haproxy config -------------------- 4.38s
2025-05-25 03:56:54.227131 | orchestrator | loadbalancer : Copying checks for services which are enabled ------------ 4.35s
2025-05-25 03:56:54.227137 | orchestrator | haproxy-config : Copying over nova haproxy config ----------------------- 4.33s
2025-05-25 03:56:54.227142 | orchestrator | haproxy-config : Copying over barbican haproxy config ------------------- 4.31s
2025-05-25 03:56:54.227147 | orchestrator | haproxy-config : Copying over magnum haproxy config --------------------- 4.30s
2025-05-25 03:56:54.227153 | orchestrator | haproxy-config : Copying over glance haproxy config --------------------- 4.29s
2025-05-25 03:56:54.227158 | orchestrator | loadbalancer : Copying over custom haproxy services configuration ------- 4.14s
2025-05-25 03:56:54.227163 | orchestrator | haproxy-config : Copying over neutron haproxy config -------------------- 4.12s
2025-05-25 03:56:54.227169 | orchestrator | haproxy-config : Copying over nova-cell:nova-novncproxy haproxy config --- 4.09s
2025-05-25 03:56:54.227174 | orchestrator | haproxy-config : Copying over prometheus haproxy config ----------------- 4.09s
2025-05-25 03:56:54.227180 | orchestrator | haproxy-config : Copying over keystone haproxy config ------------------- 4.05s
2025-05-25 03:56:54.227185 | orchestrator | haproxy-config : Copying over cinder haproxy config --------------------- 3.94s
2025-05-25 03:56:54.227190 | orchestrator | loadbalancer : Copying over proxysql config ----------------------------- 3.73s
2025-05-25 03:56:54.227196 | orchestrator | 2025-05-25 03:56:54 | INFO  | Task 1ffff23a-f667-4103-988d-40e3be169ada is in state STARTED
2025-05-25 03:56:54.227201 | orchestrator | 2025-05-25 03:56:54 | INFO  | Wait 1 second(s) until the next check
2025-05-25 03:56:57.252528 | orchestrator | 2025-05-25 03:56:57 | INFO  | Task ff96587e-20b9-4298-a3c1-3571b1e2dcb3 is in state STARTED
2025-05-25 03:56:57.256422 | orchestrator | 2025-05-25 03:56:57 | INFO  | Task c0214740-d9b7-4bee-98a0-0214c94bbfee is in state STARTED
2025-05-25 03:56:57.259098 | orchestrator | 2025-05-25 03:56:57 | INFO  | Task 1ffff23a-f667-4103-988d-40e3be169ada is in state STARTED
2025-05-25 03:56:57.259245 | orchestrator | 2025-05-25 03:56:57 | INFO  | Wait 1 second(s) until the next check
2025-05-25 03:57:00.300202 | orchestrator | 2025-05-25 03:57:00 | INFO  | Task ff96587e-20b9-4298-a3c1-3571b1e2dcb3 is in state STARTED
2025-05-25 03:57:00.301131 | orchestrator | 2025-05-25 03:57:00 | INFO  | Task c0214740-d9b7-4bee-98a0-0214c94bbfee is in state STARTED
2025-05-25 03:57:00.301912 | orchestrator | 2025-05-25 03:57:00 | INFO  | Task 1ffff23a-f667-4103-988d-40e3be169ada is in state STARTED
2025-05-25 03:57:00.301948 | orchestrator | 2025-05-25 03:57:00 | INFO  | Wait 1 second(s) until the next check
2025-05-25 03:57:03.357073 | orchestrator | 2025-05-25 03:57:03 | INFO  | Task ff96587e-20b9-4298-a3c1-3571b1e2dcb3 is in state STARTED
2025-05-25 03:57:03.358551 | orchestrator | 2025-05-25 03:57:03 | INFO  | Task c0214740-d9b7-4bee-98a0-0214c94bbfee is in state STARTED
2025-05-25 03:57:03.359500 | orchestrator | 2025-05-25 03:57:03 | INFO  | Task 1ffff23a-f667-4103-988d-40e3be169ada is in state STARTED
2025-05-25 03:57:03.359753 | orchestrator | 2025-05-25 03:57:03 | INFO  | Wait 1 second(s) until the next check
2025-05-25 03:57:06.409337 | orchestrator | 2025-05-25 03:57:06 | INFO  | Task ff96587e-20b9-4298-a3c1-3571b1e2dcb3 is in state STARTED 2025-05-25
03:57:06.413601 | orchestrator | 2025-05-25 03:57:06 | INFO  | Task c0214740-d9b7-4bee-98a0-0214c94bbfee is in state STARTED
2025-05-25 03:57:06.415124 | orchestrator | 2025-05-25 03:57:06 | INFO  | Task 1ffff23a-f667-4103-988d-40e3be169ada is in state STARTED
2025-05-25 03:57:06.415162 | orchestrator | 2025-05-25 03:57:06 | INFO  | Wait 1 second(s) until the next check
2025-05-25 03:57:09.457105 | orchestrator | 2025-05-25 03:57:09 | INFO  | Task ff96587e-20b9-4298-a3c1-3571b1e2dcb3 is in state STARTED
2025-05-25 03:57:09.457281 | orchestrator | 2025-05-25 03:57:09 | INFO  | Task c0214740-d9b7-4bee-98a0-0214c94bbfee is in state STARTED
2025-05-25 03:57:09.458279 | orchestrator | 2025-05-25 03:57:09 | INFO  | Task 1ffff23a-f667-4103-988d-40e3be169ada is in state STARTED
2025-05-25 03:57:09.458379 | orchestrator | 2025-05-25 03:57:09 | INFO  | Wait 1 second(s) until the next check
2025-05-25 03:57:12.496387 | orchestrator | 2025-05-25 03:57:12 | INFO  | Task ff96587e-20b9-4298-a3c1-3571b1e2dcb3 is in state STARTED
2025-05-25 03:57:12.496588 | orchestrator | 2025-05-25 03:57:12 | INFO  | Task c0214740-d9b7-4bee-98a0-0214c94bbfee is in state STARTED
2025-05-25 03:57:12.497063 | orchestrator | 2025-05-25 03:57:12 | INFO  | Task 1ffff23a-f667-4103-988d-40e3be169ada is in state STARTED
2025-05-25 03:57:12.497096 | orchestrator | 2025-05-25 03:57:12 | INFO  | Wait 1 second(s) until the next check
2025-05-25 03:57:15.539692 | orchestrator | 2025-05-25 03:57:15 | INFO  | Task ff96587e-20b9-4298-a3c1-3571b1e2dcb3 is in state STARTED
2025-05-25 03:57:15.541360 | orchestrator | 2025-05-25 03:57:15 | INFO  | Task c0214740-d9b7-4bee-98a0-0214c94bbfee is in state STARTED
2025-05-25 03:57:15.542426 | orchestrator | 2025-05-25 03:57:15 | INFO  | Task 1ffff23a-f667-4103-988d-40e3be169ada is in state STARTED
2025-05-25 03:57:15.542577 | orchestrator | 2025-05-25 03:57:15 | INFO  | Wait 1 second(s) until the next check
2025-05-25 03:57:18.577035 | orchestrator | 2025-05-25 03:57:18 | INFO  | Task ff96587e-20b9-4298-a3c1-3571b1e2dcb3 is in state STARTED
2025-05-25 03:57:18.577628 | orchestrator | 2025-05-25 03:57:18 | INFO  | Task c0214740-d9b7-4bee-98a0-0214c94bbfee is in state STARTED
2025-05-25 03:57:18.579400 | orchestrator | 2025-05-25 03:57:18 | INFO  | Task 1ffff23a-f667-4103-988d-40e3be169ada is in state STARTED
2025-05-25 03:57:18.579427 | orchestrator | 2025-05-25 03:57:18 | INFO  | Wait 1 second(s) until the next check
2025-05-25 03:57:21.661123 | orchestrator | 2025-05-25 03:57:21 | INFO  | Task ff96587e-20b9-4298-a3c1-3571b1e2dcb3 is in state STARTED
2025-05-25 03:57:21.662354 | orchestrator | 2025-05-25 03:57:21 | INFO  | Task c0214740-d9b7-4bee-98a0-0214c94bbfee is in state STARTED
2025-05-25 03:57:21.663655 | orchestrator | 2025-05-25 03:57:21 | INFO  | Task 1ffff23a-f667-4103-988d-40e3be169ada is in state STARTED
2025-05-25 03:57:21.663973 | orchestrator | 2025-05-25 03:57:21 | INFO  | Wait 1 second(s) until the next check
2025-05-25 03:57:24.717330 | orchestrator | 2025-05-25 03:57:24 | INFO  | Task ff96587e-20b9-4298-a3c1-3571b1e2dcb3 is in state STARTED
2025-05-25 03:57:24.717462 | orchestrator | 2025-05-25 03:57:24 | INFO  | Task c0214740-d9b7-4bee-98a0-0214c94bbfee is in state STARTED
2025-05-25 03:57:24.717490 | orchestrator | 2025-05-25 03:57:24 | INFO  | Task 1ffff23a-f667-4103-988d-40e3be169ada is in state STARTED
2025-05-25 03:57:24.717511 | orchestrator | 2025-05-25 03:57:24 | INFO  | Wait 1 second(s) until the next check
2025-05-25 03:57:27.768159 | orchestrator | 2025-05-25 03:57:27 | INFO  | Task ff96587e-20b9-4298-a3c1-3571b1e2dcb3 is in state STARTED
2025-05-25 03:57:27.771406 | orchestrator | 2025-05-25 03:57:27 | INFO  | Task c0214740-d9b7-4bee-98a0-0214c94bbfee is in state STARTED
2025-05-25 03:57:27.775098 | orchestrator | 2025-05-25 03:57:27 | INFO  | Task 1ffff23a-f667-4103-988d-40e3be169ada is in state STARTED
2025-05-25 03:57:27.775426 | orchestrator | 2025-05-25 03:57:27 | INFO  | Wait 1 second(s) until the next check
2025-05-25 03:57:30.830799 | orchestrator | 2025-05-25 03:57:30 | INFO  | Task ff96587e-20b9-4298-a3c1-3571b1e2dcb3 is in state STARTED
2025-05-25 03:57:30.830871 | orchestrator | 2025-05-25 03:57:30 | INFO  | Task c0214740-d9b7-4bee-98a0-0214c94bbfee is in state STARTED
2025-05-25 03:57:30.830877 | orchestrator | 2025-05-25 03:57:30 | INFO  | Task 1ffff23a-f667-4103-988d-40e3be169ada is in state STARTED
2025-05-25 03:57:30.830882 | orchestrator | 2025-05-25 03:57:30 | INFO  | Wait 1 second(s) until the next check
2025-05-25 03:57:33.876543 | orchestrator | 2025-05-25 03:57:33 | INFO  | Task ff96587e-20b9-4298-a3c1-3571b1e2dcb3 is in state STARTED
2025-05-25 03:57:33.878636 | orchestrator | 2025-05-25 03:57:33 | INFO  | Task c0214740-d9b7-4bee-98a0-0214c94bbfee is in state STARTED
2025-05-25 03:57:33.881960 | orchestrator | 2025-05-25 03:57:33 | INFO  | Task 1ffff23a-f667-4103-988d-40e3be169ada is in state STARTED
2025-05-25 03:57:33.882104 | orchestrator | 2025-05-25 03:57:33 | INFO  | Wait 1 second(s) until the next check
2025-05-25 03:57:36.936408 | orchestrator | 2025-05-25 03:57:36 | INFO  | Task ff96587e-20b9-4298-a3c1-3571b1e2dcb3 is in state STARTED
2025-05-25 03:57:36.939311 | orchestrator | 2025-05-25 03:57:36 | INFO  | Task c0214740-d9b7-4bee-98a0-0214c94bbfee is in state STARTED
2025-05-25 03:57:36.941395 | orchestrator | 2025-05-25 03:57:36 | INFO  | Task 1ffff23a-f667-4103-988d-40e3be169ada is in state STARTED
2025-05-25 03:57:36.942218 | orchestrator | 2025-05-25 03:57:36 | INFO  | Wait 1 second(s) until the next check
2025-05-25 03:57:39.986440 | orchestrator | 2025-05-25 03:57:39 | INFO  | Task ff96587e-20b9-4298-a3c1-3571b1e2dcb3 is in state STARTED
2025-05-25 03:57:39.989376 | orchestrator | 2025-05-25 03:57:39 | INFO  | Task c0214740-d9b7-4bee-98a0-0214c94bbfee is in state STARTED
2025-05-25 03:57:39.991230 | orchestrator | 2025-05-25 03:57:39 | INFO  | Task 1ffff23a-f667-4103-988d-40e3be169ada is in state STARTED
2025-05-25 03:57:39.991290 | orchestrator | 2025-05-25 03:57:39 | INFO  | Wait 1 second(s) until the next check
2025-05-25 03:57:43.034436 | orchestrator | 2025-05-25 03:57:43 | INFO  | Task ff96587e-20b9-4298-a3c1-3571b1e2dcb3 is in state STARTED
2025-05-25 03:57:43.034547 | orchestrator | 2025-05-25 03:57:43 | INFO  | Task c0214740-d9b7-4bee-98a0-0214c94bbfee is in state STARTED
2025-05-25 03:57:43.034674 | orchestrator | 2025-05-25 03:57:43 | INFO  | Task 1ffff23a-f667-4103-988d-40e3be169ada is in state STARTED
2025-05-25 03:57:43.034741 | orchestrator | 2025-05-25 03:57:43 | INFO  | Wait 1 second(s) until the next check
2025-05-25 03:57:46.075047 | orchestrator | 2025-05-25 03:57:46 | INFO  | Task ff96587e-20b9-4298-a3c1-3571b1e2dcb3 is in state STARTED
2025-05-25 03:57:46.075672 | orchestrator | 2025-05-25 03:57:46 | INFO  | Task c0214740-d9b7-4bee-98a0-0214c94bbfee is in state STARTED
2025-05-25 03:57:46.077230 | orchestrator | 2025-05-25 03:57:46 | INFO  | Task 1ffff23a-f667-4103-988d-40e3be169ada is in state STARTED
2025-05-25 03:57:46.077264 | orchestrator | 2025-05-25 03:57:46 | INFO  | Wait 1 second(s) until the next check
2025-05-25 03:57:49.124730 | orchestrator | 2025-05-25 03:57:49 | INFO  | Task ff96587e-20b9-4298-a3c1-3571b1e2dcb3 is in state STARTED
2025-05-25 03:57:49.125699 | orchestrator | 2025-05-25 03:57:49 | INFO  | Task c0214740-d9b7-4bee-98a0-0214c94bbfee is in state STARTED
2025-05-25 03:57:49.127206 | orchestrator | 2025-05-25 03:57:49 | INFO  | Task 1ffff23a-f667-4103-988d-40e3be169ada is in state STARTED
2025-05-25 03:57:49.127392 | orchestrator | 2025-05-25 03:57:49 | INFO  | Wait 1 second(s) until the next check
2025-05-25 03:57:52.176652 | orchestrator | 2025-05-25 03:57:52 | INFO  | Task ff96587e-20b9-4298-a3c1-3571b1e2dcb3 is in state STARTED
2025-05-25 03:57:52.178328 | orchestrator | 2025-05-25 03:57:52 | INFO  | Task c0214740-d9b7-4bee-98a0-0214c94bbfee is in state STARTED
2025-05-25 03:57:52.180436 | orchestrator | 2025-05-25 03:57:52 | INFO  | Task 1ffff23a-f667-4103-988d-40e3be169ada is in state STARTED
2025-05-25 03:57:52.180940 | orchestrator | 2025-05-25 03:57:52 | INFO  | Wait 1 second(s) until the next check
2025-05-25 03:57:55.230549 | orchestrator | 2025-05-25 03:57:55 | INFO  | Task ff96587e-20b9-4298-a3c1-3571b1e2dcb3 is in state STARTED
2025-05-25 03:57:55.230656 | orchestrator | 2025-05-25 03:57:55 | INFO  | Task c0214740-d9b7-4bee-98a0-0214c94bbfee is in state STARTED
2025-05-25 03:57:55.231030 | orchestrator | 2025-05-25 03:57:55 | INFO  | Task 1ffff23a-f667-4103-988d-40e3be169ada is in state STARTED
2025-05-25 03:57:55.231056 | orchestrator | 2025-05-25 03:57:55 | INFO  | Wait 1 second(s) until the next check
2025-05-25 03:57:58.283239 | orchestrator | 2025-05-25 03:57:58 | INFO  | Task ff96587e-20b9-4298-a3c1-3571b1e2dcb3 is in state STARTED
2025-05-25 03:57:58.285311 | orchestrator | 2025-05-25 03:57:58 | INFO  | Task c0214740-d9b7-4bee-98a0-0214c94bbfee is in state STARTED
2025-05-25 03:57:58.286606 | orchestrator | 2025-05-25 03:57:58 | INFO  | Task 1ffff23a-f667-4103-988d-40e3be169ada is in state STARTED
2025-05-25 03:57:58.286910 | orchestrator | 2025-05-25 03:57:58 | INFO  | Wait 1 second(s) until the next check
2025-05-25 03:58:01.344558 | orchestrator | 2025-05-25 03:58:01 | INFO  | Task ff96587e-20b9-4298-a3c1-3571b1e2dcb3 is in state STARTED
2025-05-25 03:58:01.345118 | orchestrator | 2025-05-25 03:58:01 | INFO  | Task c0214740-d9b7-4bee-98a0-0214c94bbfee is in state STARTED
2025-05-25 03:58:01.345507 | orchestrator | 2025-05-25 03:58:01 | INFO  | Task 1ffff23a-f667-4103-988d-40e3be169ada is in state STARTED
2025-05-25 03:58:01.345530 | orchestrator | 2025-05-25 03:58:01 | INFO  | Wait 1 second(s) until the next check
2025-05-25 03:58:04.393726 | orchestrator | 2025-05-25 03:58:04 | INFO  | Task ff96587e-20b9-4298-a3c1-3571b1e2dcb3 is in state STARTED
2025-05-25 03:58:04.395263 | orchestrator | 2025-05-25 03:58:04 | INFO  | Task c0214740-d9b7-4bee-98a0-0214c94bbfee is in state STARTED
2025-05-25 03:58:04.397071 | orchestrator | 2025-05-25 03:58:04 | INFO  | Task 1ffff23a-f667-4103-988d-40e3be169ada is in state STARTED
2025-05-25 03:58:04.397201 | orchestrator | 2025-05-25 03:58:04 | INFO  | Wait 1 second(s) until the next check
2025-05-25 03:58:07.457090 | orchestrator | 2025-05-25 03:58:07 | INFO  | Task ff96587e-20b9-4298-a3c1-3571b1e2dcb3 is in state STARTED
2025-05-25 03:58:07.458449 | orchestrator | 2025-05-25 03:58:07 | INFO  | Task c0214740-d9b7-4bee-98a0-0214c94bbfee is in state STARTED
2025-05-25 03:58:07.462561 | orchestrator | 2025-05-25 03:58:07 | INFO  | Task 1ffff23a-f667-4103-988d-40e3be169ada is in state STARTED
2025-05-25 03:58:07.462601 | orchestrator | 2025-05-25 03:58:07 | INFO  | Wait 1 second(s) until the next check
2025-05-25 03:58:10.507065 | orchestrator | 2025-05-25 03:58:10 | INFO  | Task ff96587e-20b9-4298-a3c1-3571b1e2dcb3 is in state STARTED
2025-05-25 03:58:10.508868 | orchestrator | 2025-05-25 03:58:10 | INFO  | Task c0214740-d9b7-4bee-98a0-0214c94bbfee is in state STARTED
2025-05-25 03:58:10.511849 | orchestrator | 2025-05-25 03:58:10 | INFO  | Task 1ffff23a-f667-4103-988d-40e3be169ada is in state STARTED
2025-05-25 03:58:10.512009 | orchestrator | 2025-05-25 03:58:10 | INFO  | Wait 1 second(s) until the next check
2025-05-25 03:58:13.563972 | orchestrator | 2025-05-25 03:58:13 | INFO  | Task ff96587e-20b9-4298-a3c1-3571b1e2dcb3 is in state STARTED
2025-05-25 03:58:13.564073 | orchestrator | 2025-05-25 03:58:13 | INFO  | Task c0214740-d9b7-4bee-98a0-0214c94bbfee is in state STARTED
2025-05-25 03:58:13.564616 | orchestrator | 2025-05-25 03:58:13 | INFO  | Task 1ffff23a-f667-4103-988d-40e3be169ada is in state STARTED
2025-05-25 03:58:13.564790 | orchestrator | 2025-05-25 03:58:13 | INFO  | Wait 1 second(s) until the next check
2025-05-25 03:58:16.627313 | orchestrator | 2025-05-25 03:58:16 | INFO  | Task ff96587e-20b9-4298-a3c1-3571b1e2dcb3 is in state STARTED
2025-05-25 03:58:16.629150 | orchestrator
| 2025-05-25 03:58:16 | INFO  | Task c0214740-d9b7-4bee-98a0-0214c94bbfee is in state STARTED 2025-05-25 03:58:16.631461 | orchestrator | 2025-05-25 03:58:16 | INFO  | Task 1ffff23a-f667-4103-988d-40e3be169ada is in state STARTED 2025-05-25 03:58:16.631775 | orchestrator | 2025-05-25 03:58:16 | INFO  | Wait 1 second(s) until the next check 2025-05-25 03:58:19.682388 | orchestrator | 2025-05-25 03:58:19 | INFO  | Task ff96587e-20b9-4298-a3c1-3571b1e2dcb3 is in state STARTED 2025-05-25 03:58:19.685019 | orchestrator | 2025-05-25 03:58:19 | INFO  | Task c0214740-d9b7-4bee-98a0-0214c94bbfee is in state STARTED 2025-05-25 03:58:19.687631 | orchestrator | 2025-05-25 03:58:19 | INFO  | Task 1ffff23a-f667-4103-988d-40e3be169ada is in state STARTED 2025-05-25 03:58:19.687718 | orchestrator | 2025-05-25 03:58:19 | INFO  | Wait 1 second(s) until the next check 2025-05-25 03:58:22.731187 | orchestrator | 2025-05-25 03:58:22 | INFO  | Task ff96587e-20b9-4298-a3c1-3571b1e2dcb3 is in state STARTED 2025-05-25 03:58:22.732540 | orchestrator | 2025-05-25 03:58:22 | INFO  | Task c0214740-d9b7-4bee-98a0-0214c94bbfee is in state STARTED 2025-05-25 03:58:22.735081 | orchestrator | 2025-05-25 03:58:22 | INFO  | Task 1ffff23a-f667-4103-988d-40e3be169ada is in state STARTED 2025-05-25 03:58:22.735104 | orchestrator | 2025-05-25 03:58:22 | INFO  | Wait 1 second(s) until the next check 2025-05-25 03:58:25.789429 | orchestrator | 2025-05-25 03:58:25 | INFO  | Task ff96587e-20b9-4298-a3c1-3571b1e2dcb3 is in state STARTED 2025-05-25 03:58:25.790990 | orchestrator | 2025-05-25 03:58:25 | INFO  | Task c0214740-d9b7-4bee-98a0-0214c94bbfee is in state STARTED 2025-05-25 03:58:25.793045 | orchestrator | 2025-05-25 03:58:25 | INFO  | Task 1ffff23a-f667-4103-988d-40e3be169ada is in state STARTED 2025-05-25 03:58:25.793081 | orchestrator | 2025-05-25 03:58:25 | INFO  | Wait 1 second(s) until the next check 2025-05-25 03:58:28.843440 | orchestrator | 2025-05-25 03:58:28 | INFO  | Task 
ff96587e-20b9-4298-a3c1-3571b1e2dcb3 is in state STARTED 2025-05-25 03:58:28.845291 | orchestrator | 2025-05-25 03:58:28 | INFO  | Task c0214740-d9b7-4bee-98a0-0214c94bbfee is in state STARTED 2025-05-25 03:58:28.846709 | orchestrator | 2025-05-25 03:58:28 | INFO  | Task 1ffff23a-f667-4103-988d-40e3be169ada is in state STARTED 2025-05-25 03:58:28.846750 | orchestrator | 2025-05-25 03:58:28 | INFO  | Wait 1 second(s) until the next check 2025-05-25 03:58:31.903102 | orchestrator | 2025-05-25 03:58:31 | INFO  | Task ff96587e-20b9-4298-a3c1-3571b1e2dcb3 is in state STARTED 2025-05-25 03:58:31.904642 | orchestrator | 2025-05-25 03:58:31 | INFO  | Task c0214740-d9b7-4bee-98a0-0214c94bbfee is in state STARTED 2025-05-25 03:58:31.906676 | orchestrator | 2025-05-25 03:58:31 | INFO  | Task 1ffff23a-f667-4103-988d-40e3be169ada is in state STARTED 2025-05-25 03:58:31.906718 | orchestrator | 2025-05-25 03:58:31 | INFO  | Wait 1 second(s) until the next check 2025-05-25 03:58:34.954485 | orchestrator | 2025-05-25 03:58:34 | INFO  | Task ff96587e-20b9-4298-a3c1-3571b1e2dcb3 is in state STARTED 2025-05-25 03:58:34.955895 | orchestrator | 2025-05-25 03:58:34 | INFO  | Task c0214740-d9b7-4bee-98a0-0214c94bbfee is in state STARTED 2025-05-25 03:58:34.959642 | orchestrator | 2025-05-25 03:58:34 | INFO  | Task 1ffff23a-f667-4103-988d-40e3be169ada is in state STARTED 2025-05-25 03:58:34.960243 | orchestrator | 2025-05-25 03:58:34 | INFO  | Wait 1 second(s) until the next check 2025-05-25 03:58:38.012117 | orchestrator | 2025-05-25 03:58:38 | INFO  | Task ff96587e-20b9-4298-a3c1-3571b1e2dcb3 is in state STARTED 2025-05-25 03:58:38.012774 | orchestrator | 2025-05-25 03:58:38 | INFO  | Task c0214740-d9b7-4bee-98a0-0214c94bbfee is in state STARTED 2025-05-25 03:58:38.015388 | orchestrator | 2025-05-25 03:58:38 | INFO  | Task 1ffff23a-f667-4103-988d-40e3be169ada is in state STARTED 2025-05-25 03:58:38.015427 | orchestrator | 2025-05-25 03:58:38 | INFO  | Wait 1 second(s) until the next 
check 2025-05-25 03:58:41.061424 | orchestrator | 2025-05-25 03:58:41 | INFO  | Task ff96587e-20b9-4298-a3c1-3571b1e2dcb3 is in state STARTED 2025-05-25 03:58:41.062604 | orchestrator | 2025-05-25 03:58:41 | INFO  | Task c0214740-d9b7-4bee-98a0-0214c94bbfee is in state STARTED 2025-05-25 03:58:41.064569 | orchestrator | 2025-05-25 03:58:41 | INFO  | Task 1ffff23a-f667-4103-988d-40e3be169ada is in state STARTED 2025-05-25 03:58:41.064593 | orchestrator | 2025-05-25 03:58:41 | INFO  | Wait 1 second(s) until the next check 2025-05-25 03:58:44.110801 | orchestrator | 2025-05-25 03:58:44 | INFO  | Task ff96587e-20b9-4298-a3c1-3571b1e2dcb3 is in state STARTED 2025-05-25 03:58:44.113659 | orchestrator | 2025-05-25 03:58:44 | INFO  | Task c0214740-d9b7-4bee-98a0-0214c94bbfee is in state STARTED 2025-05-25 03:58:44.115619 | orchestrator | 2025-05-25 03:58:44 | INFO  | Task 1ffff23a-f667-4103-988d-40e3be169ada is in state STARTED 2025-05-25 03:58:44.115686 | orchestrator | 2025-05-25 03:58:44 | INFO  | Wait 1 second(s) until the next check 2025-05-25 03:58:47.169785 | orchestrator | 2025-05-25 03:58:47 | INFO  | Task ff96587e-20b9-4298-a3c1-3571b1e2dcb3 is in state STARTED 2025-05-25 03:58:47.171855 | orchestrator | 2025-05-25 03:58:47 | INFO  | Task c0214740-d9b7-4bee-98a0-0214c94bbfee is in state STARTED 2025-05-25 03:58:47.175053 | orchestrator | 2025-05-25 03:58:47 | INFO  | Task 1ffff23a-f667-4103-988d-40e3be169ada is in state STARTED 2025-05-25 03:58:47.175672 | orchestrator | 2025-05-25 03:58:47 | INFO  | Wait 1 second(s) until the next check 2025-05-25 03:58:50.227036 | orchestrator | 2025-05-25 03:58:50 | INFO  | Task ff96587e-20b9-4298-a3c1-3571b1e2dcb3 is in state STARTED 2025-05-25 03:58:50.229272 | orchestrator | 2025-05-25 03:58:50 | INFO  | Task c0214740-d9b7-4bee-98a0-0214c94bbfee is in state STARTED 2025-05-25 03:58:50.234294 | orchestrator | 2025-05-25 03:58:50 | INFO  | Task 1ffff23a-f667-4103-988d-40e3be169ada is in state STARTED 2025-05-25 
03:58:50.234357 | orchestrator | 2025-05-25 03:58:50 | INFO  | Wait 1 second(s) until the next check 2025-05-25 03:58:53.281566 | orchestrator | 2025-05-25 03:58:53 | INFO  | Task ff96587e-20b9-4298-a3c1-3571b1e2dcb3 is in state STARTED 2025-05-25 03:58:53.282140 | orchestrator | 2025-05-25 03:58:53 | INFO  | Task c0214740-d9b7-4bee-98a0-0214c94bbfee is in state STARTED 2025-05-25 03:58:53.283440 | orchestrator | 2025-05-25 03:58:53 | INFO  | Task 1ffff23a-f667-4103-988d-40e3be169ada is in state STARTED 2025-05-25 03:58:53.283455 | orchestrator | 2025-05-25 03:58:53 | INFO  | Wait 1 second(s) until the next check 2025-05-25 03:58:56.333048 | orchestrator | 2025-05-25 03:58:56 | INFO  | Task ff96587e-20b9-4298-a3c1-3571b1e2dcb3 is in state STARTED 2025-05-25 03:58:56.333263 | orchestrator | 2025-05-25 03:58:56 | INFO  | Task c0214740-d9b7-4bee-98a0-0214c94bbfee is in state STARTED 2025-05-25 03:58:56.335700 | orchestrator | 2025-05-25 03:58:56 | INFO  | Task 1ffff23a-f667-4103-988d-40e3be169ada is in state STARTED 2025-05-25 03:58:56.339683 | orchestrator | 2025-05-25 03:58:56 | INFO  | Wait 1 second(s) until the next check 2025-05-25 03:58:59.390863 | orchestrator | 2025-05-25 03:58:59 | INFO  | Task ff96587e-20b9-4298-a3c1-3571b1e2dcb3 is in state STARTED 2025-05-25 03:58:59.397606 | orchestrator | 2025-05-25 03:58:59 | INFO  | Task c0214740-d9b7-4bee-98a0-0214c94bbfee is in state SUCCESS 2025-05-25 03:58:59.400096 | orchestrator | 2025-05-25 03:58:59.400139 | orchestrator | 2025-05-25 03:58:59.400152 | orchestrator | PLAY [Prepare deployment of Ceph services] ************************************* 2025-05-25 03:58:59.400164 | orchestrator | 2025-05-25 03:58:59.400175 | orchestrator | TASK [ceph-facts : Include facts.yml] ****************************************** 2025-05-25 03:58:59.400186 | orchestrator | Sunday 25 May 2025 03:47:57 +0000 (0:00:00.756) 0:00:00.756 ************ 2025-05-25 03:58:59.400199 | orchestrator | included: 
/ansible/roles/ceph-facts/tasks/facts.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-05-25 03:58:59.400211 | orchestrator |
2025-05-25 03:58:59.400223 | orchestrator | TASK [ceph-facts : Check if it is atomic host] *********************************
2025-05-25 03:58:59.400234 | orchestrator | Sunday 25 May 2025 03:47:58 +0000 (0:00:01.252) 0:00:02.008 ************
2025-05-25 03:58:59.400245 | orchestrator | ok: [testbed-node-3]
2025-05-25 03:58:59.400257 | orchestrator | ok: [testbed-node-4]
2025-05-25 03:58:59.400268 | orchestrator | ok: [testbed-node-0]
2025-05-25 03:58:59.400278 | orchestrator | ok: [testbed-node-1]
2025-05-25 03:58:59.400289 | orchestrator | ok: [testbed-node-2]
2025-05-25 03:58:59.400300 | orchestrator | ok: [testbed-node-5]
2025-05-25 03:58:59.400310 | orchestrator |
2025-05-25 03:58:59.400321 | orchestrator | TASK [ceph-facts : Set_fact is_atomic] *****************************************
2025-05-25 03:58:59.400332 | orchestrator | Sunday 25 May 2025 03:47:59 +0000 (0:00:01.396) 0:00:03.405 ************
2025-05-25 03:58:59.400343 | orchestrator | ok: [testbed-node-0]
2025-05-25 03:58:59.400354 | orchestrator | ok: [testbed-node-1]
2025-05-25 03:58:59.400365 | orchestrator | ok: [testbed-node-2]
2025-05-25 03:58:59.400375 | orchestrator | ok: [testbed-node-3]
2025-05-25 03:58:59.400386 | orchestrator | ok: [testbed-node-4]
2025-05-25 03:58:59.400396 | orchestrator | ok: [testbed-node-5]
2025-05-25 03:58:59.400407 | orchestrator |
2025-05-25 03:58:59.400418 | orchestrator | TASK [ceph-facts : Check if podman binary is present] **************************
2025-05-25 03:58:59.400429 | orchestrator | Sunday 25 May 2025 03:48:00 +0000 (0:00:00.778) 0:00:04.184 ************
2025-05-25 03:58:59.400440 | orchestrator | ok: [testbed-node-0]
2025-05-25 03:58:59.400450 | orchestrator | ok: [testbed-node-1]
2025-05-25 03:58:59.400461 | orchestrator | ok: [testbed-node-2]
2025-05-25 03:58:59.400471 | orchestrator | ok: [testbed-node-3]
2025-05-25 03:58:59.400482 | orchestrator | ok: [testbed-node-4]
2025-05-25 03:58:59.400493 | orchestrator | ok: [testbed-node-5]
2025-05-25 03:58:59.400503 | orchestrator |
2025-05-25 03:58:59.400520 | orchestrator | TASK [ceph-facts : Set_fact container_binary] **********************************
2025-05-25 03:58:59.400540 | orchestrator | Sunday 25 May 2025 03:48:01 +0000 (0:00:00.812) 0:00:04.996 ************
2025-05-25 03:58:59.401253 | orchestrator | ok: [testbed-node-0]
2025-05-25 03:58:59.401270 | orchestrator | ok: [testbed-node-1]
2025-05-25 03:58:59.401291 | orchestrator | ok: [testbed-node-2]
2025-05-25 03:58:59.401600 | orchestrator | ok: [testbed-node-3]
2025-05-25 03:58:59.401629 | orchestrator | ok: [testbed-node-4]
2025-05-25 03:58:59.401644 | orchestrator | ok: [testbed-node-5]
2025-05-25 03:58:59.401663 | orchestrator |
2025-05-25 03:58:59.401682 | orchestrator | TASK [ceph-facts : Set_fact ceph_cmd] ******************************************
2025-05-25 03:58:59.401700 | orchestrator | Sunday 25 May 2025 03:48:02 +0000 (0:00:00.768) 0:00:05.765 ************
2025-05-25 03:58:59.401718 | orchestrator | ok: [testbed-node-0]
2025-05-25 03:58:59.402961 | orchestrator | ok: [testbed-node-1]
2025-05-25 03:58:59.403009 | orchestrator | ok: [testbed-node-2]
2025-05-25 03:58:59.403028 | orchestrator | ok: [testbed-node-3]
2025-05-25 03:58:59.404020 | orchestrator | ok: [testbed-node-4]
2025-05-25 03:58:59.404109 | orchestrator | ok: [testbed-node-5]
2025-05-25 03:58:59.404136 | orchestrator |
2025-05-25 03:58:59.404158 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python] *********************
2025-05-25 03:58:59.404178 | orchestrator | Sunday 25 May 2025 03:48:02 +0000 (0:00:00.547) 0:00:06.313 ************
2025-05-25 03:58:59.404254 | orchestrator | ok: [testbed-node-0]
2025-05-25 03:58:59.404274 | orchestrator | ok: [testbed-node-1]
2025-05-25 03:58:59.404293 | orchestrator | ok: [testbed-node-2]
2025-05-25 03:58:59.404441 | orchestrator | ok: [testbed-node-3]
2025-05-25 03:58:59.404463 | orchestrator | ok: [testbed-node-4]
2025-05-25 03:58:59.404482 | orchestrator | ok: [testbed-node-5]
2025-05-25 03:58:59.404500 | orchestrator |
2025-05-25 03:58:59.404518 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python if not previously set] ***
2025-05-25 03:58:59.404535 | orchestrator | Sunday 25 May 2025 03:48:03 +0000 (0:00:00.723) 0:00:07.037 ************
2025-05-25 03:58:59.404548 | orchestrator | skipping: [testbed-node-0]
2025-05-25 03:58:59.404561 | orchestrator | skipping: [testbed-node-1]
2025-05-25 03:58:59.404574 | orchestrator | skipping: [testbed-node-2]
2025-05-25 03:58:59.404585 | orchestrator | skipping: [testbed-node-3]
2025-05-25 03:58:59.404598 | orchestrator | skipping: [testbed-node-4]
2025-05-25 03:58:59.404615 | orchestrator | skipping: [testbed-node-5]
2025-05-25 03:58:59.404652 | orchestrator |
2025-05-25 03:58:59.404666 | orchestrator | TASK [ceph-facts : Set_fact ceph_release ceph_stable_release] ******************
2025-05-25 03:58:59.404679 | orchestrator | Sunday 25 May 2025 03:48:04 +0000 (0:00:00.722) 0:00:07.759 ************
2025-05-25 03:58:59.404692 | orchestrator | ok: [testbed-node-0]
2025-05-25 03:58:59.404704 | orchestrator | ok: [testbed-node-1]
2025-05-25 03:58:59.404717 | orchestrator | ok: [testbed-node-2]
2025-05-25 03:58:59.404730 | orchestrator | ok: [testbed-node-3]
2025-05-25 03:58:59.404750 | orchestrator | ok: [testbed-node-4]
2025-05-25 03:58:59.404771 | orchestrator | ok: [testbed-node-5]
2025-05-25 03:58:59.404790 | orchestrator |
2025-05-25 03:58:59.404889 | orchestrator | TASK [ceph-facts : Set_fact monitor_name ansible_facts['hostname']] ************
2025-05-25 03:58:59.405056 | orchestrator | Sunday 25 May 2025 03:48:05 +0000 (0:00:00.902) 0:00:08.662 ************
2025-05-25 03:58:59.405080 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0)
2025-05-25 03:58:59.405100 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2025-05-25 03:58:59.405120 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2025-05-25 03:58:59.405250 | orchestrator |
2025-05-25 03:58:59.405282 | orchestrator | TASK [ceph-facts : Set_fact container_exec_cmd] ********************************
2025-05-25 03:58:59.405302 | orchestrator | Sunday 25 May 2025 03:48:05 +0000 (0:00:00.839) 0:00:09.502 ************
2025-05-25 03:58:59.405320 | orchestrator | ok: [testbed-node-0]
2025-05-25 03:58:59.405331 | orchestrator | ok: [testbed-node-1]
2025-05-25 03:58:59.405341 | orchestrator | ok: [testbed-node-2]
2025-05-25 03:58:59.405350 | orchestrator | ok: [testbed-node-3]
2025-05-25 03:58:59.405360 | orchestrator | ok: [testbed-node-4]
2025-05-25 03:58:59.405370 | orchestrator | ok: [testbed-node-5]
2025-05-25 03:58:59.405379 | orchestrator |
2025-05-25 03:58:59.405404 | orchestrator | TASK [ceph-facts : Find a running mon container] *******************************
2025-05-25 03:58:59.405415 | orchestrator | Sunday 25 May 2025 03:48:07 +0000 (0:00:01.064) 0:00:10.566 ************
2025-05-25 03:58:59.405424 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0)
2025-05-25 03:58:59.405434 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2025-05-25 03:58:59.405450 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2025-05-25 03:58:59.405481 | orchestrator |
2025-05-25 03:58:59.405492 | orchestrator | TASK [ceph-facts : Check for a ceph mon socket] ********************************
2025-05-25 03:58:59.405516 | orchestrator | Sunday 25 May 2025 03:48:09 +0000 (0:00:02.652) 0:00:13.219 ************
2025-05-25 03:58:59.405526 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)
2025-05-25 03:58:59.405536 | orchestrator | skipping: [testbed-node-0] =>
(item=testbed-node-1)
2025-05-25 03:58:59.405545 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)
2025-05-25 03:58:59.405554 | orchestrator | skipping: [testbed-node-0]
2025-05-25 03:58:59.405564 | orchestrator |
2025-05-25 03:58:59.405573 | orchestrator | TASK [ceph-facts : Check if the ceph mon socket is in-use] *********************
2025-05-25 03:58:59.405583 | orchestrator | Sunday 25 May 2025 03:48:10 +0000 (0:00:00.946) 0:00:14.166 ************
2025-05-25 03:58:59.405594 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})
2025-05-25 03:58:59.405607 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})
2025-05-25 03:58:59.405617 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})
2025-05-25 03:58:59.405627 | orchestrator | skipping: [testbed-node-0]
2025-05-25 03:58:59.405636 | orchestrator |
2025-05-25 03:58:59.405646 | orchestrator | TASK [ceph-facts : Set_fact running_mon - non_container] ***********************
2025-05-25 03:58:59.405655 | orchestrator | Sunday 25 May 2025 03:48:11 +0000 (0:00:00.854) 0:00:15.021 ************
2025-05-25 03:58:59.405667 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})
2025-05-25 03:58:59.405679 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})
2025-05-25 03:58:59.405689 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})
2025-05-25 03:58:59.405699 | orchestrator | skipping: [testbed-node-0]
2025-05-25 03:58:59.405708 | orchestrator |
2025-05-25 03:58:59.405718 | orchestrator | TASK [ceph-facts : Set_fact running_mon - container] ***************************
2025-05-25 03:58:59.405727 | orchestrator | Sunday 25 May 2025 03:48:11 +0000 (0:00:00.432) 0:00:15.453 ************
2025-05-25 03:58:59.405745 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'stdout': '', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-0'], 'start': '2025-05-25 03:48:07.586571', 'end': '2025-05-25 03:48:07.842089', 'delta': '0:00:00.255518', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-0', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})
2025-05-25 03:58:59.405775 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'stdout': '', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-1'], 'start': '2025-05-25 03:48:08.554416', 'end': '2025-05-25 03:48:08.814940', 'delta': '0:00:00.260524', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-1', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})
2025-05-25 03:58:59.405786 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'stdout': '', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-2'], 'start': '2025-05-25 03:48:09.260408', 'end': '2025-05-25 03:48:09.518906', 'delta': '0:00:00.258498', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-2', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})
2025-05-25 03:58:59.405797 | orchestrator | skipping: [testbed-node-0]
2025-05-25 03:58:59.405806 | orchestrator |
2025-05-25 03:58:59.405816 | orchestrator | TASK [ceph-facts : Set_fact _container_exec_cmd] *******************************
2025-05-25 03:58:59.405826 | orchestrator | Sunday
25 May 2025 03:48:12 +0000 (0:00:00.220) 0:00:15.674 ************
2025-05-25 03:58:59.405835 | orchestrator | ok: [testbed-node-0]
2025-05-25 03:58:59.405845 | orchestrator | ok: [testbed-node-1]
2025-05-25 03:58:59.405855 | orchestrator | ok: [testbed-node-2]
2025-05-25 03:58:59.405864 | orchestrator | ok: [testbed-node-3]
2025-05-25 03:58:59.405873 | orchestrator | ok: [testbed-node-4]
2025-05-25 03:58:59.405883 | orchestrator | ok: [testbed-node-5]
2025-05-25 03:58:59.405892 | orchestrator |
2025-05-25 03:58:59.405922 | orchestrator | TASK [ceph-facts : Get current fsid if cluster is already running] *************
2025-05-25 03:58:59.405940 | orchestrator | Sunday 25 May 2025 03:48:13 +0000 (0:00:01.503) 0:00:17.177 ************
2025-05-25 03:58:59.405953 | orchestrator | ok: [testbed-node-0]
2025-05-25 03:58:59.405963 | orchestrator |
2025-05-25 03:58:59.405972 | orchestrator | TASK [ceph-facts : Set_fact current_fsid rc 1] *********************************
2025-05-25 03:58:59.405982 | orchestrator | Sunday 25 May 2025 03:48:14 +0000 (0:00:00.659) 0:00:17.837 ************
2025-05-25 03:58:59.405991 | orchestrator | skipping: [testbed-node-0]
2025-05-25 03:58:59.406001 | orchestrator | skipping: [testbed-node-1]
2025-05-25 03:58:59.406010 | orchestrator | skipping: [testbed-node-2]
2025-05-25 03:58:59.406065 | orchestrator | skipping: [testbed-node-3]
2025-05-25 03:58:59.406075 | orchestrator | skipping: [testbed-node-4]
2025-05-25 03:58:59.406086 | orchestrator | skipping: [testbed-node-5]
2025-05-25 03:58:59.406095 | orchestrator |
2025-05-25 03:58:59.406105 | orchestrator | TASK [ceph-facts : Get current fsid] *******************************************
2025-05-25 03:58:59.406114 | orchestrator | Sunday 25 May 2025 03:48:15 +0000 (0:00:01.113) 0:00:18.951 ************
2025-05-25 03:58:59.406124 | orchestrator | skipping: [testbed-node-0]
2025-05-25 03:58:59.406134 | orchestrator | skipping: [testbed-node-1]
2025-05-25 03:58:59.406143 | orchestrator | skipping: [testbed-node-2]
2025-05-25 03:58:59.406152 | orchestrator | skipping: [testbed-node-3]
2025-05-25 03:58:59.406162 | orchestrator | skipping: [testbed-node-4]
2025-05-25 03:58:59.406171 | orchestrator | skipping: [testbed-node-5]
2025-05-25 03:58:59.406180 | orchestrator |
2025-05-25 03:58:59.406190 | orchestrator | TASK [ceph-facts : Set_fact fsid] **********************************************
2025-05-25 03:58:59.406207 | orchestrator | Sunday 25 May 2025 03:48:17 +0000 (0:00:01.585) 0:00:20.536 ************
2025-05-25 03:58:59.406217 | orchestrator | skipping: [testbed-node-0]
2025-05-25 03:58:59.406226 | orchestrator | skipping: [testbed-node-1]
2025-05-25 03:58:59.406236 | orchestrator | skipping: [testbed-node-2]
2025-05-25 03:58:59.406245 | orchestrator | skipping: [testbed-node-3]
2025-05-25 03:58:59.406255 | orchestrator | skipping: [testbed-node-4]
2025-05-25 03:58:59.406264 | orchestrator | skipping: [testbed-node-5]
2025-05-25 03:58:59.406273 | orchestrator |
2025-05-25 03:58:59.406283 | orchestrator | TASK [ceph-facts : Set_fact fsid from current_fsid] ****************************
2025-05-25 03:58:59.406293 | orchestrator | Sunday 25 May 2025 03:48:17 +0000 (0:00:00.925) 0:00:21.461 ************
2025-05-25 03:58:59.406302 | orchestrator | skipping: [testbed-node-0]
2025-05-25 03:58:59.406311 | orchestrator |
2025-05-25 03:58:59.406326 | orchestrator | TASK [ceph-facts : Generate cluster fsid] **************************************
2025-05-25 03:58:59.406336 | orchestrator | Sunday 25 May 2025 03:48:18 +0000 (0:00:00.219) 0:00:21.680 ************
2025-05-25 03:58:59.406346 | orchestrator | skipping: [testbed-node-0]
2025-05-25 03:58:59.406355 | orchestrator |
2025-05-25 03:58:59.406365 | orchestrator | TASK [ceph-facts : Set_fact fsid] **********************************************
2025-05-25 03:58:59.406375 | orchestrator | Sunday 25 May 2025 03:48:18 +0000 (0:00:00.304) 0:00:21.985 ************
2025-05-25 03:58:59.406384 | orchestrator | skipping: [testbed-node-0]
2025-05-25 03:58:59.406394 | orchestrator | skipping: [testbed-node-1]
2025-05-25 03:58:59.406403 | orchestrator | skipping: [testbed-node-2]
2025-05-25 03:58:59.406413 | orchestrator | skipping: [testbed-node-3]
2025-05-25 03:58:59.406422 | orchestrator | skipping: [testbed-node-4]
2025-05-25 03:58:59.406432 | orchestrator | skipping: [testbed-node-5]
2025-05-25 03:58:59.406450 | orchestrator |
2025-05-25 03:58:59.406468 | orchestrator | TASK [ceph-facts : Resolve device link(s)] *************************************
2025-05-25 03:58:59.406495 | orchestrator | Sunday 25 May 2025 03:48:19 +0000 (0:00:00.766) 0:00:22.751 ************
2025-05-25 03:58:59.406513 | orchestrator | skipping: [testbed-node-0]
2025-05-25 03:58:59.406530 | orchestrator | skipping: [testbed-node-1]
2025-05-25 03:58:59.406543 | orchestrator | skipping: [testbed-node-2]
2025-05-25 03:58:59.406553 | orchestrator | skipping: [testbed-node-3]
2025-05-25 03:58:59.406562 | orchestrator | skipping: [testbed-node-4]
2025-05-25 03:58:59.406572 | orchestrator | skipping: [testbed-node-5]
2025-05-25 03:58:59.406581 | orchestrator |
2025-05-25 03:58:59.406590 | orchestrator | TASK [ceph-facts : Set_fact build devices from resolved symlinks] **************
2025-05-25 03:58:59.406600 | orchestrator | Sunday 25 May 2025 03:48:20 +0000 (0:00:01.358) 0:00:24.110 ************
2025-05-25 03:58:59.406609 | orchestrator | skipping: [testbed-node-0]
2025-05-25 03:58:59.406619 | orchestrator | skipping: [testbed-node-1]
2025-05-25 03:58:59.406628 | orchestrator | skipping: [testbed-node-2]
2025-05-25 03:58:59.406637 | orchestrator | skipping: [testbed-node-3]
2025-05-25 03:58:59.406646 | orchestrator | skipping: [testbed-node-4]
2025-05-25 03:58:59.406656 | orchestrator | skipping: [testbed-node-5]
2025-05-25 03:58:59.406665 | orchestrator |
2025-05-25 03:58:59.406674 | orchestrator | TASK [ceph-facts : Resolve dedicated_device link(s)] ***************************
2025-05-25 03:58:59.406684 | orchestrator | Sunday 25 May 2025 03:48:21 +0000 (0:00:01.098) 0:00:25.208 ************
2025-05-25 03:58:59.406693 | orchestrator | skipping: [testbed-node-0]
2025-05-25 03:58:59.406703 | orchestrator | skipping: [testbed-node-1]
2025-05-25 03:58:59.406712 | orchestrator | skipping: [testbed-node-2]
2025-05-25 03:58:59.406721 | orchestrator | skipping: [testbed-node-3]
2025-05-25 03:58:59.406731 | orchestrator | skipping: [testbed-node-4]
2025-05-25 03:58:59.406740 | orchestrator | skipping: [testbed-node-5]
2025-05-25 03:58:59.406749 | orchestrator |
2025-05-25 03:58:59.406759 | orchestrator | TASK [ceph-facts : Set_fact build dedicated_devices from resolved symlinks] ****
2025-05-25 03:58:59.406768 | orchestrator | Sunday 25 May 2025 03:48:22 +0000 (0:00:01.073) 0:00:26.282 ************
2025-05-25 03:58:59.406778 | orchestrator | skipping: [testbed-node-0]
2025-05-25 03:58:59.406795 | orchestrator | skipping: [testbed-node-1]
2025-05-25 03:58:59.406804 | orchestrator | skipping: [testbed-node-2]
2025-05-25 03:58:59.406813 | orchestrator | skipping: [testbed-node-3]
2025-05-25 03:58:59.406823 | orchestrator | skipping: [testbed-node-4]
2025-05-25 03:58:59.406832 | orchestrator | skipping: [testbed-node-5]
2025-05-25 03:58:59.406841 | orchestrator |
2025-05-25 03:58:59.406851 | orchestrator | TASK [ceph-facts : Resolve bluestore_wal_device link(s)] ***********************
2025-05-25 03:58:59.406860 | orchestrator | Sunday 25 May 2025 03:48:23 +0000 (0:00:00.700) 0:00:26.983 ************
2025-05-25 03:58:59.406870 | orchestrator | skipping: [testbed-node-0]
2025-05-25 03:58:59.406879 | orchestrator | skipping: [testbed-node-1]
2025-05-25 03:58:59.406888 | orchestrator | skipping: [testbed-node-2]
2025-05-25 03:58:59.406898 | orchestrator | skipping: [testbed-node-3]
2025-05-25 03:58:59.406927 | orchestrator | skipping: [testbed-node-4]
2025-05-25 03:58:59.406937 | orchestrator | skipping: [testbed-node-5]
2025-05-25 03:58:59.406946 | orchestrator |
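The fsid tasks above (Get current fsid, Set_fact fsid from current_fsid, Generate cluster fsid) are all skipped in this run, but their intent is straightforward: reuse the fsid of an already running cluster, fall back to a configured value, and only otherwise mint a fresh UUID. A minimal Python sketch of that decision, as an illustration rather than the actual ceph-ansible code; `choose_fsid` and its parameters are hypothetical names:

```python
import uuid
from typing import Optional


def choose_fsid(current_fsid: Optional[str], configured_fsid: Optional[str]) -> str:
    """Pick the cluster fsid.

    Prefer the fsid reported by a running cluster, then an explicitly
    configured value; otherwise generate a fresh UUID (the
    "Generate cluster fsid" case in the log above).
    """
    if current_fsid:
        return current_fsid
    if configured_fsid:
        return configured_fsid
    return str(uuid.uuid4())
```

A Ceph fsid is just a UUID identifying the cluster, so generating one when nothing exists yet is safe; keeping an existing one is what makes reruns idempotent.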
2025-05-25 03:58:59.406956 | orchestrator | TASK [ceph-facts : Set_fact build bluestore_wal_devices from resolved symlinks] ***
2025-05-25 03:58:59.406971 | orchestrator | Sunday 25 May 2025 03:48:24 +0000 (0:00:00.674) 0:00:27.657 ************
2025-05-25 03:58:59.406988 | orchestrator | skipping: [testbed-node-0]
2025-05-25 03:58:59.407006 | orchestrator | skipping: [testbed-node-1]
2025-05-25 03:58:59.407024 | orchestrator | skipping: [testbed-node-2]
2025-05-25 03:58:59.407041 | orchestrator | skipping: [testbed-node-3]
2025-05-25 03:58:59.407058 | orchestrator | skipping: [testbed-node-4]
2025-05-25 03:58:59.407068 | orchestrator | skipping: [testbed-node-5]
2025-05-25 03:58:59.407078 | orchestrator |
2025-05-25 03:58:59.407088 | orchestrator | TASK [ceph-facts : Collect existed devices] ************************************
2025-05-25 03:58:59.407097 | orchestrator | Sunday 25 May 2025 03:48:24 +0000 (0:00:00.655) 0:00:28.312 ************
2025-05-25 03:58:59.407107 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2025-05-25 03:58:59.407118 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2025-05-25 03:58:59.407133 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2025-05-25 03:58:59.407144 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2025-05-25 03:58:59.407173 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2025-05-25 03:58:59.407191 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2025-05-25 03:58:59.407201 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2025-05-25 03:58:59.407211 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2025-05-25 03:58:59.407228 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_8a4e249f-8a05-4326-b566-23f41d92ff9f', 'scsi-SQEMU_QEMU_HARDDISK_8a4e249f-8a05-4326-b566-23f41d92ff9f'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_8a4e249f-8a05-4326-b566-23f41d92ff9f-part1', 'scsi-SQEMU_QEMU_HARDDISK_8a4e249f-8a05-4326-b566-23f41d92ff9f-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['372462ea-137d-4e94-9465-a2fbb2a7f4ee']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': '372462ea-137d-4e94-9465-a2fbb2a7f4ee'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_8a4e249f-8a05-4326-b566-23f41d92ff9f-part14', 'scsi-SQEMU_QEMU_HARDDISK_8a4e249f-8a05-4326-b566-23f41d92ff9f-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_8a4e249f-8a05-4326-b566-23f41d92ff9f-part15', 'scsi-SQEMU_QEMU_HARDDISK_8a4e249f-8a05-4326-b566-23f41d92ff9f-part15'], 'labels': ['UEFI'], 'masters': [],
'uuids': ['A4F8-12D8']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': 'A4F8-12D8'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_8a4e249f-8a05-4326-b566-23f41d92ff9f-part16', 'scsi-SQEMU_QEMU_HARDDISK_8a4e249f-8a05-4326-b566-23f41d92ff9f-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['0de9fa52-b0fa-4de2-9fd3-df23fb104826']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '0de9fa52-b0fa-4de2-9fd3-df23fb104826'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-05-25 03:58:59.407250 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-05-25-03-02-51-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2025-05-25 03:58:59.407268 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-25 03:58:59.407278 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 
'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-25 03:58:59.407287 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-25 03:58:59.407297 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-25 03:58:59.407307 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-25 03:58:59.407316 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 
'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-25 03:58:59.407326 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-25 03:58:59.407340 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-25 03:58:59.407359 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_e659d218-d092-4e32-8aa6-14fd719ec7d5', 'scsi-SQEMU_QEMU_HARDDISK_e659d218-d092-4e32-8aa6-14fd719ec7d5'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_e659d218-d092-4e32-8aa6-14fd719ec7d5-part1', 'scsi-SQEMU_QEMU_HARDDISK_e659d218-d092-4e32-8aa6-14fd719ec7d5-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['372462ea-137d-4e94-9465-a2fbb2a7f4ee']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': '372462ea-137d-4e94-9465-a2fbb2a7f4ee'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_e659d218-d092-4e32-8aa6-14fd719ec7d5-part14', 'scsi-SQEMU_QEMU_HARDDISK_e659d218-d092-4e32-8aa6-14fd719ec7d5-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_e659d218-d092-4e32-8aa6-14fd719ec7d5-part15', 'scsi-SQEMU_QEMU_HARDDISK_e659d218-d092-4e32-8aa6-14fd719ec7d5-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['A4F8-12D8']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': 'A4F8-12D8'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_e659d218-d092-4e32-8aa6-14fd719ec7d5-part16', 'scsi-SQEMU_QEMU_HARDDISK_e659d218-d092-4e32-8aa6-14fd719ec7d5-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['0de9fa52-b0fa-4de2-9fd3-df23fb104826']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '0de9fa52-b0fa-4de2-9fd3-df23fb104826'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-05-25 03:58:59.407377 | 
orchestrator | skipping: [testbed-node-0] 2025-05-25 03:58:59.407388 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-05-25-03-02-45-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2025-05-25 03:58:59.407398 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-25 03:58:59.407408 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-25 03:58:59.407429 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 
'virtual': 1}})  2025-05-25 03:58:59.407439 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-25 03:58:59.407460 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-25 03:58:59.407470 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-25 03:58:59.407480 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-25 03:58:59.407490 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 
'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-25 03:58:59.407499 | orchestrator | skipping: [testbed-node-1] 2025-05-25 03:58:59.407515 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_87443a00-a40d-492e-8034-179827711ad7', 'scsi-SQEMU_QEMU_HARDDISK_87443a00-a40d-492e-8034-179827711ad7'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_87443a00-a40d-492e-8034-179827711ad7-part1', 'scsi-SQEMU_QEMU_HARDDISK_87443a00-a40d-492e-8034-179827711ad7-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['372462ea-137d-4e94-9465-a2fbb2a7f4ee']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': '372462ea-137d-4e94-9465-a2fbb2a7f4ee'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_87443a00-a40d-492e-8034-179827711ad7-part14', 'scsi-SQEMU_QEMU_HARDDISK_87443a00-a40d-492e-8034-179827711ad7-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_87443a00-a40d-492e-8034-179827711ad7-part15', 'scsi-SQEMU_QEMU_HARDDISK_87443a00-a40d-492e-8034-179827711ad7-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['A4F8-12D8']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': 'A4F8-12D8'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_87443a00-a40d-492e-8034-179827711ad7-part16', 
'scsi-SQEMU_QEMU_HARDDISK_87443a00-a40d-492e-8034-179827711ad7-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['0de9fa52-b0fa-4de2-9fd3-df23fb104826']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '0de9fa52-b0fa-4de2-9fd3-df23fb104826'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-05-25 03:58:59.407537 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-05-25-03-02-43-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2025-05-25 03:58:59.407548 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--02f362e7--7983--50b5--b688--a41104a01860-osd--block--02f362e7--7983--50b5--b688--a41104a01860', 'dm-uuid-LVM-EIto835nqPIkh0oeoEL0S8DBvWlfCbl8H8re0YIsYzAQqybZRNhTB6UMYipVoexk'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2025-05-25 03:58:59.407558 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': 
['dm-name-ceph--b24cffad--8a1f--50fd--b816--ada28c3c4ac7-osd--block--b24cffad--8a1f--50fd--b816--ada28c3c4ac7', 'dm-uuid-LVM-tFlPNDaJrKb6B5eh5v1xX1ivLAW9n1dXQLABeBBDprsmjK9bFTwfkVwlCJsQ0XuP'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2025-05-25 03:58:59.407568 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-25 03:58:59.407579 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-25 03:58:59.407589 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-25 03:58:59.407598 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 
'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-25 03:58:59.407608 | orchestrator | skipping: [testbed-node-2] 2025-05-25 03:58:59.407622 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-25 03:58:59.407638 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-25 03:58:59.407654 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-25 03:58:59.407665 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': 
['dm-name-ceph--02ca1cf7--fa58--5bc0--a798--b7d21582c1b0-osd--block--02ca1cf7--fa58--5bc0--a798--b7d21582c1b0', 'dm-uuid-LVM-HAu4Vl80XjNgQGqZh3sFVXfBzfGDPBzt7M61G9WS93n3QFc52Avm05aFGbBLGJsF'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2025-05-25 03:58:59.407675 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-25 03:58:59.407685 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--733a1394--dd45--5d63--8d82--63858202edf3-osd--block--733a1394--dd45--5d63--8d82--63858202edf3', 'dm-uuid-LVM-N0DNQ7QOeq8qzVSMsTYiekiqreuPz8LqqVLZhgTOTRYilPVNBZZGNH3uHj9wCjop'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2025-05-25 03:58:59.407695 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 
Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-25 03:58:59.407705 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--33e996ff--67e1--5789--9eb3--97043475c088-osd--block--33e996ff--67e1--5789--9eb3--97043475c088', 'dm-uuid-LVM-f2mxDkg5RboGiSFRnoZoE0Jf5zdoZooLX3dEGjd0x3LIyAGja8yP08lNRRkeYga4'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2025-05-25 03:58:59.407719 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--3ece5568--3437--595e--b3ba--b2f91a77c86c-osd--block--3ece5568--3437--595e--b3ba--b2f91a77c86c', 'dm-uuid-LVM-M0xTfxjiXljnWhv0xWS2ZQJ2ZEKwlMtX3setecTbz5KjpidltETUQJYINQ7cMcdk'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2025-05-25 03:58:59.407744 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d33fe3e5-8e27-4cea-af5e-c9a31aaf43f7', 'scsi-SQEMU_QEMU_HARDDISK_d33fe3e5-8e27-4cea-af5e-c9a31aaf43f7'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d33fe3e5-8e27-4cea-af5e-c9a31aaf43f7-part1', 'scsi-SQEMU_QEMU_HARDDISK_d33fe3e5-8e27-4cea-af5e-c9a31aaf43f7-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['372462ea-137d-4e94-9465-a2fbb2a7f4ee']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': '372462ea-137d-4e94-9465-a2fbb2a7f4ee'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d33fe3e5-8e27-4cea-af5e-c9a31aaf43f7-part14', 'scsi-SQEMU_QEMU_HARDDISK_d33fe3e5-8e27-4cea-af5e-c9a31aaf43f7-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d33fe3e5-8e27-4cea-af5e-c9a31aaf43f7-part15', 'scsi-SQEMU_QEMU_HARDDISK_d33fe3e5-8e27-4cea-af5e-c9a31aaf43f7-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['A4F8-12D8']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': 'A4F8-12D8'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d33fe3e5-8e27-4cea-af5e-c9a31aaf43f7-part16', 'scsi-SQEMU_QEMU_HARDDISK_d33fe3e5-8e27-4cea-af5e-c9a31aaf43f7-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['0de9fa52-b0fa-4de2-9fd3-df23fb104826']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '0de9fa52-b0fa-4de2-9fd3-df23fb104826'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-05-25 03:58:59.407755 | 
orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-25 03:58:59.407765 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-25 03:58:59.407775 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-25 03:58:59.407785 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-25 03:58:59.407807 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 
'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2025-05-25 03:58:59.407844 | orchestrator | skipping: [testbed-node-3] => (items sdb, sdc: 20.00 GB QEMU HARDDISK, ceph OSD LVM PVs; sdd: 20.00 GB QEMU HARDDISK, unused; sr0: QEMU DVD-ROM, config-2 label)
2025-05-25 03:58:59.407834 | orchestrator | skipping: [testbed-node-4] => (items loop2-loop7: empty loop devices; sda: 80.00 GB QEMU HARDDISK root disk with cloudimg-rootfs/UEFI/BOOT partitions; sdb, sdc: 20.00 GB ceph OSD LVM PVs; sdd: 20.00 GB, unused; sr0: QEMU DVD-ROM, config-2 label)
2025-05-25 03:58:59.407823 | orchestrator | skipping: [testbed-node-5] => (items loop4-loop7: empty loop devices; sda: 80.00 GB QEMU HARDDISK root disk with cloudimg-rootfs/UEFI/BOOT partitions; sdb, sdc: 20.00 GB ceph OSD LVM PVs; sdd: 20.00 GB, unused; sr0: QEMU DVD-ROM, config-2 label)
2025-05-25 03:58:59.408172 | orchestrator | skipping: [testbed-node-3]
2025-05-25 03:58:59.408197 | orchestrator | skipping: [testbed-node-5]
2025-05-25 03:58:59.408222 | orchestrator | skipping: [testbed-node-4]
2025-05-25 03:58:59.408232 | orchestrator |
2025-05-25 03:58:59.408242 | orchestrator | TASK [ceph-facts : Set_fact devices generate device list when osd_auto_discovery] ***
2025-05-25 03:58:59.408252 | orchestrator | Sunday 25 May 2025 03:48:26 +0000 (0:00:01.906)       0:00:30.219 ************
2025-05-25 03:58:59.408262 | orchestrator | skipping: [testbed-node-0] => (items loop0-loop7, sda, sr0; false_condition: 'inventory_hostname in groups.get(osd_group_name, [])')
2025-05-25 03:58:59.408394 | orchestrator | skipping: [testbed-node-0]
2025-05-25 03:58:59.408402 | orchestrator | skipping: [testbed-node-1] => (items loop0-loop7, sda, sr0; false_condition: 'inventory_hostname in groups.get(osd_group_name, [])')
2025-05-25 03:58:59.408508 | orchestrator | skipping: [testbed-node-1]
2025-05-25 03:58:59.408521 | orchestrator | skipping: [testbed-node-2] => (items loop0-loop6; false_condition: 'inventory_hostname in groups.get(osd_group_name, [])')
2025-05-25 03:58:59.408745 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True,
'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-05-25 03:58:59.408759 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_87443a00-a40d-492e-8034-179827711ad7', 'scsi-SQEMU_QEMU_HARDDISK_87443a00-a40d-492e-8034-179827711ad7'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_87443a00-a40d-492e-8034-179827711ad7-part1', 'scsi-SQEMU_QEMU_HARDDISK_87443a00-a40d-492e-8034-179827711ad7-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['372462ea-137d-4e94-9465-a2fbb2a7f4ee']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': '372462ea-137d-4e94-9465-a2fbb2a7f4ee'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_87443a00-a40d-492e-8034-179827711ad7-part14', 'scsi-SQEMU_QEMU_HARDDISK_87443a00-a40d-492e-8034-179827711ad7-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_87443a00-a40d-492e-8034-179827711ad7-part15', 
'scsi-SQEMU_QEMU_HARDDISK_87443a00-a40d-492e-8034-179827711ad7-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['A4F8-12D8']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': 'A4F8-12D8'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_87443a00-a40d-492e-8034-179827711ad7-part16', 'scsi-SQEMU_QEMU_HARDDISK_87443a00-a40d-492e-8034-179827711ad7-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['0de9fa52-b0fa-4de2-9fd3-df23fb104826']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '0de9fa52-b0fa-4de2-9fd3-df23fb104826'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-05-25 03:58:59.408790 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-05-25-03-02-43-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-05-25 03:58:59.408813 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': 
{'ids': ['dm-name-ceph--02f362e7--7983--50b5--b688--a41104a01860-osd--block--02f362e7--7983--50b5--b688--a41104a01860', 'dm-uuid-LVM-EIto835nqPIkh0oeoEL0S8DBvWlfCbl8H8re0YIsYzAQqybZRNhTB6UMYipVoexk'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-05-25 03:58:59.408826 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--b24cffad--8a1f--50fd--b816--ada28c3c4ac7-osd--block--b24cffad--8a1f--50fd--b816--ada28c3c4ac7', 'dm-uuid-LVM-tFlPNDaJrKb6B5eh5v1xX1ivLAW9n1dXQLABeBBDprsmjK9bFTwfkVwlCJsQ0XuP'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-05-25 03:58:59.408843 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 
'item'})  2025-05-25 03:58:59.408852 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-05-25 03:58:59.408860 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-05-25 03:58:59.408868 | orchestrator | skipping: [testbed-node-2] 2025-05-25 03:58:59.408880 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 
'item'})  2025-05-25 03:58:59.408893 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-05-25 03:58:59.408922 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-05-25 03:58:59.408931 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-05-25 03:58:59.408944 | orchestrator | skipping: 
[testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-05-25 03:58:59.408953 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--02ca1cf7--fa58--5bc0--a798--b7d21582c1b0-osd--block--02ca1cf7--fa58--5bc0--a798--b7d21582c1b0', 'dm-uuid-LVM-HAu4Vl80XjNgQGqZh3sFVXfBzfGDPBzt7M61G9WS93n3QFc52Avm05aFGbBLGJsF'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-05-25 03:58:59.408972 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d33fe3e5-8e27-4cea-af5e-c9a31aaf43f7', 'scsi-SQEMU_QEMU_HARDDISK_d33fe3e5-8e27-4cea-af5e-c9a31aaf43f7'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d33fe3e5-8e27-4cea-af5e-c9a31aaf43f7-part1', 'scsi-SQEMU_QEMU_HARDDISK_d33fe3e5-8e27-4cea-af5e-c9a31aaf43f7-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['372462ea-137d-4e94-9465-a2fbb2a7f4ee']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': '372462ea-137d-4e94-9465-a2fbb2a7f4ee'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d33fe3e5-8e27-4cea-af5e-c9a31aaf43f7-part14', 'scsi-SQEMU_QEMU_HARDDISK_d33fe3e5-8e27-4cea-af5e-c9a31aaf43f7-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d33fe3e5-8e27-4cea-af5e-c9a31aaf43f7-part15', 'scsi-SQEMU_QEMU_HARDDISK_d33fe3e5-8e27-4cea-af5e-c9a31aaf43f7-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['A4F8-12D8']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': 'A4F8-12D8'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d33fe3e5-8e27-4cea-af5e-c9a31aaf43f7-part16', 'scsi-SQEMU_QEMU_HARDDISK_d33fe3e5-8e27-4cea-af5e-c9a31aaf43f7-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['0de9fa52-b0fa-4de2-9fd3-df23fb104826']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '0de9fa52-b0fa-4de2-9fd3-df23fb104826'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  
2025-05-25 03:58:59.408987 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--733a1394--dd45--5d63--8d82--63858202edf3-osd--block--733a1394--dd45--5d63--8d82--63858202edf3', 'dm-uuid-LVM-N0DNQ7QOeq8qzVSMsTYiekiqreuPz8LqqVLZhgTOTRYilPVNBZZGNH3uHj9wCjop'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-05-25 03:58:59.408996 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'holders': ['ceph--02f362e7--7983--50b5--b688--a41104a01860-osd--block--02f362e7--7983--50b5--b688--a41104a01860'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-qtdq0z-8J15-gP85-P9SJ-dK07-zWb5-0DnwzK', 'scsi-0QEMU_QEMU_HARDDISK_cdfa8505-de86-48ff-8ed6-b6e1381a94b2', 'scsi-SQEMU_QEMU_HARDDISK_cdfa8505-de86-48ff-8ed6-b6e1381a94b2'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-05-25 03:58:59.409009 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-05-25 03:58:59.409022 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'holders': ['ceph--b24cffad--8a1f--50fd--b816--ada28c3c4ac7-osd--block--b24cffad--8a1f--50fd--b816--ada28c3c4ac7'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-zFQRk7-wHUy-2Er2-kQSV-Uuzs-Y07c-0XeRqW', 'scsi-0QEMU_QEMU_HARDDISK_4276f8fa-1a41-4d3c-8190-a1d2d3b80049', 'scsi-SQEMU_QEMU_HARDDISK_4276f8fa-1a41-4d3c-8190-a1d2d3b80049'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-05-25 03:58:59.409031 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-05-25 03:58:59.409044 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_dac67b12-4a3b-49b0-a18f-dd9740769fda', 'scsi-SQEMU_QEMU_HARDDISK_dac67b12-4a3b-49b0-a18f-dd9740769fda'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-05-25 03:58:59.409053 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-05-25 03:58:59.409061 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-05-25-03-02-49-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-05-25 03:58:59.409070 | orchestrator | skipping: 
[testbed-node-3] 2025-05-25 03:58:59.409082 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-05-25 03:58:59.409094 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-05-25 03:58:59.409103 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--33e996ff--67e1--5789--9eb3--97043475c088-osd--block--33e996ff--67e1--5789--9eb3--97043475c088', 'dm-uuid-LVM-f2mxDkg5RboGiSFRnoZoE0Jf5zdoZooLX3dEGjd0x3LIyAGja8yP08lNRRkeYga4'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 
41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-05-25 03:58:59.409125 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--3ece5568--3437--595e--b3ba--b2f91a77c86c-osd--block--3ece5568--3437--595e--b3ba--b2f91a77c86c', 'dm-uuid-LVM-M0xTfxjiXljnWhv0xWS2ZQJ2ZEKwlMtX3setecTbz5KjpidltETUQJYINQ7cMcdk'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-05-25 03:58:59.409186 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-05-25 03:58:59.409194 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': 
{}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-05-25 03:58:59.409207 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-05-25 03:58:59.409221 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-05-25 03:58:59.409230 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-05-25 03:58:59.409244 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-05-25 03:58:59.409252 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-05-25 03:58:59.409260 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-05-25 03:58:59.409268 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-05-25 03:58:59.409280 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-05-25 03:58:59.409295 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor':
None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-05-25 03:58:59.409309 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_43c767cb-0159-4619-b6d4-e498fa6e5c83', 'scsi-SQEMU_QEMU_HARDDISK_43c767cb-0159-4619-b6d4-e498fa6e5c83'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_43c767cb-0159-4619-b6d4-e498fa6e5c83-part1', 'scsi-SQEMU_QEMU_HARDDISK_43c767cb-0159-4619-b6d4-e498fa6e5c83-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['372462ea-137d-4e94-9465-a2fbb2a7f4ee']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': '372462ea-137d-4e94-9465-a2fbb2a7f4ee'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_43c767cb-0159-4619-b6d4-e498fa6e5c83-part14', 'scsi-SQEMU_QEMU_HARDDISK_43c767cb-0159-4619-b6d4-e498fa6e5c83-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_43c767cb-0159-4619-b6d4-e498fa6e5c83-part15', 'scsi-SQEMU_QEMU_HARDDISK_43c767cb-0159-4619-b6d4-e498fa6e5c83-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['A4F8-12D8']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': 'A4F8-12D8'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_43c767cb-0159-4619-b6d4-e498fa6e5c83-part16', 'scsi-SQEMU_QEMU_HARDDISK_43c767cb-0159-4619-b6d4-e498fa6e5c83-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['0de9fa52-b0fa-4de2-9fd3-df23fb104826']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '0de9fa52-b0fa-4de2-9fd3-df23fb104826'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-05-25 03:58:59.409330 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_2fcb52f5-9fc6-4d4d-aaa1-77a08e6dc4e0', 'scsi-SQEMU_QEMU_HARDDISK_2fcb52f5-9fc6-4d4d-aaa1-77a08e6dc4e0'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_2fcb52f5-9fc6-4d4d-aaa1-77a08e6dc4e0-part1', 'scsi-SQEMU_QEMU_HARDDISK_2fcb52f5-9fc6-4d4d-aaa1-77a08e6dc4e0-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['372462ea-137d-4e94-9465-a2fbb2a7f4ee']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': '372462ea-137d-4e94-9465-a2fbb2a7f4ee'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_2fcb52f5-9fc6-4d4d-aaa1-77a08e6dc4e0-part14', 'scsi-SQEMU_QEMU_HARDDISK_2fcb52f5-9fc6-4d4d-aaa1-77a08e6dc4e0-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_2fcb52f5-9fc6-4d4d-aaa1-77a08e6dc4e0-part15', 'scsi-SQEMU_QEMU_HARDDISK_2fcb52f5-9fc6-4d4d-aaa1-77a08e6dc4e0-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['A4F8-12D8']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': 'A4F8-12D8'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_2fcb52f5-9fc6-4d4d-aaa1-77a08e6dc4e0-part16', 'scsi-SQEMU_QEMU_HARDDISK_2fcb52f5-9fc6-4d4d-aaa1-77a08e6dc4e0-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['0de9fa52-b0fa-4de2-9fd3-df23fb104826']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '0de9fa52-b0fa-4de2-9fd3-df23fb104826'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-05-25 03:58:59.409346 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'holders': ['ceph--33e996ff--67e1--5789--9eb3--97043475c088-osd--block--33e996ff--67e1--5789--9eb3--97043475c088'], 'host': 'SCSI storage controller: Red Hat, Inc.
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-yI0x8n-xcOR-DPeb-Offp-taab-jv40-D8CklK', 'scsi-0QEMU_QEMU_HARDDISK_201f277c-fdb2-416e-b305-0d8ba90b32cd', 'scsi-SQEMU_QEMU_HARDDISK_201f277c-fdb2-416e-b305-0d8ba90b32cd'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-05-25 03:58:59.409356 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'holders': ['ceph--02ca1cf7--fa58--5bc0--a798--b7d21582c1b0-osd--block--02ca1cf7--fa58--5bc0--a798--b7d21582c1b0'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-75K85T-qtHF-V2PQ-3keF-McNZ-rYMq-djrq4j', 'scsi-0QEMU_QEMU_HARDDISK_17d1c6f1-1305-4025-b6c8-ee1be555c001', 'scsi-SQEMU_QEMU_HARDDISK_17d1c6f1-1305-4025-b6c8-ee1be555c001'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-05-25 03:58:59.409373 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'holders': ['ceph--3ece5568--3437--595e--b3ba--b2f91a77c86c-osd--block--3ece5568--3437--595e--b3ba--b2f91a77c86c'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-TOpkSP-RmlJ-8nES-992L-XmPw-19k6-xzW621', 'scsi-0QEMU_QEMU_HARDDISK_8968a7f7-851b-405b-80f4-de48ab1dffee', 'scsi-SQEMU_QEMU_HARDDISK_8968a7f7-851b-405b-80f4-de48ab1dffee'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-05-25 03:58:59.409389 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_603d0154-8a06-450e-a743-756d85b1bc6a', 'scsi-SQEMU_QEMU_HARDDISK_603d0154-8a06-450e-a743-756d85b1bc6a'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-05-25 03:58:59.409403 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'holders': ['ceph--733a1394--dd45--5d63--8d82--63858202edf3-osd--block--733a1394--dd45--5d63--8d82--63858202edf3'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-XBC6Ra-aDc7-yze2-aQJ9-K1bq-dbNG-W4h3yL', 'scsi-0QEMU_QEMU_HARDDISK_b0e50223-c4d0-48f7-a5f8-d1963b067c82', 'scsi-SQEMU_QEMU_HARDDISK_b0e50223-c4d0-48f7-a5f8-d1963b067c82'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-05-25 03:58:59.409411 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_38e86a76-d592-4447-9c79-2151d2192c3f', 'scsi-SQEMU_QEMU_HARDDISK_38e86a76-d592-4447-9c79-2151d2192c3f'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-05-25 03:58:59.409420 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-05-25-03-02-47-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None,
'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-05-25 03:58:59.409432 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-05-25-03-02-44-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-05-25 03:58:59.409440 | orchestrator | skipping: [testbed-node-4]
2025-05-25 03:58:59.409448 | orchestrator | skipping: [testbed-node-5]
2025-05-25 03:58:59.409456 | orchestrator |
2025-05-25 03:58:59.409469 | orchestrator | TASK [ceph-facts : Check if the ceph conf exists] ******************************
2025-05-25 03:58:59.409477 | orchestrator | Sunday 25 May 2025 03:48:28 +0000 (0:00:01.524) 0:00:31.744 ************
2025-05-25 03:58:59.409485 | orchestrator | ok: [testbed-node-0]
2025-05-25 03:58:59.409493 | orchestrator | ok: [testbed-node-1]
2025-05-25 03:58:59.409500 | orchestrator | ok: [testbed-node-2]
2025-05-25 03:58:59.409512 | orchestrator | ok: [testbed-node-3]
2025-05-25 03:58:59.409520 | orchestrator | ok: [testbed-node-4]
2025-05-25 03:58:59.409528 | orchestrator | ok: [testbed-node-5]
2025-05-25 03:58:59.409535 | orchestrator |
2025-05-25 03:58:59.409543 | orchestrator | TASK [ceph-facts : Set default osd_pool_default_crush_rule fact] ***************
2025-05-25 03:58:59.409551 | orchestrator | Sunday 25 May 2025 03:48:29 +0000 (0:00:01.315) 0:00:33.059 ************
2025-05-25 03:58:59.409558 | orchestrator | ok: [testbed-node-0]
2025-05-25 03:58:59.409566 | orchestrator | ok: [testbed-node-1]
2025-05-25 03:58:59.409574 | orchestrator | ok: [testbed-node-2]
2025-05-25 03:58:59.409582 | orchestrator | ok: [testbed-node-3]
2025-05-25 03:58:59.409589 | orchestrator | ok: [testbed-node-4]
2025-05-25 03:58:59.409597 | orchestrator | ok: [testbed-node-5]
2025-05-25 03:58:59.409604 | orchestrator |
2025-05-25 03:58:59.409612 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] ***************************
2025-05-25 03:58:59.409620 | orchestrator | Sunday 25 May 2025 03:48:30 +0000 (0:00:00.970) 0:00:34.030 ************
2025-05-25 03:58:59.409628 | orchestrator | skipping: [testbed-node-0]
2025-05-25 03:58:59.409636 | orchestrator | skipping: [testbed-node-1]
2025-05-25 03:58:59.409644 | orchestrator | skipping: [testbed-node-2]
2025-05-25 03:58:59.409652 | orchestrator | skipping: [testbed-node-3]
2025-05-25 03:58:59.409659 | orchestrator | skipping: [testbed-node-4]
2025-05-25 03:58:59.409667 | orchestrator | skipping: [testbed-node-5]
2025-05-25 03:58:59.409675 | orchestrator |
2025-05-25 03:58:59.409682 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] ***********************
2025-05-25 03:58:59.409690 | orchestrator | Sunday 25 May 2025 03:48:31 +0000 (0:00:00.934) 0:00:34.965 ************
2025-05-25 03:58:59.409698 | orchestrator | skipping: [testbed-node-0]
2025-05-25 03:58:59.409706 | orchestrator | skipping: [testbed-node-1]
2025-05-25 03:58:59.409713 | orchestrator | skipping: [testbed-node-2]
2025-05-25 03:58:59.409721 | orchestrator | skipping: [testbed-node-3]
2025-05-25 03:58:59.409729 | orchestrator | skipping: [testbed-node-4]
2025-05-25 03:58:59.409736 | orchestrator | skipping: [testbed-node-5]
2025-05-25 03:58:59.409744 | orchestrator |
2025-05-25 03:58:59.409752 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] ***************************
2025-05-25 03:58:59.409760 | orchestrator | Sunday 25 May 2025 03:48:31 +0000 (0:00:00.412) 0:00:35.378 ************
2025-05-25 03:58:59.409768 | orchestrator | skipping: [testbed-node-0]
2025-05-25 03:58:59.409775 | orchestrator | skipping: [testbed-node-1]
2025-05-25 03:58:59.409783 | orchestrator | skipping: [testbed-node-2]
2025-05-25 03:58:59.409790 | orchestrator | skipping: [testbed-node-3]
2025-05-25 03:58:59.409798 | orchestrator | skipping: [testbed-node-4]
2025-05-25 03:58:59.409806 | orchestrator | skipping: [testbed-node-5]
2025-05-25 03:58:59.409813 | orchestrator |
2025-05-25 03:58:59.409821 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] ***********************
2025-05-25 03:58:59.409829 | orchestrator | Sunday 25 May 2025 03:48:32 +0000 (0:00:00.955) 0:00:36.333 ************
2025-05-25 03:58:59.409837 | orchestrator | skipping: [testbed-node-0]
2025-05-25 03:58:59.409845 | orchestrator | skipping: [testbed-node-1]
2025-05-25 03:58:59.409852 | orchestrator | skipping: [testbed-node-2]
2025-05-25 03:58:59.409860 | orchestrator | skipping: [testbed-node-3]
2025-05-25 03:58:59.409868 | orchestrator | skipping: [testbed-node-4]
2025-05-25 03:58:59.409876 | orchestrator | skipping: [testbed-node-5]
2025-05-25 03:58:59.409883 | orchestrator |
2025-05-25 03:58:59.409891 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv4] *************************
2025-05-25 03:58:59.409899 | orchestrator | Sunday 25 May 2025 03:48:33 +0000 (0:00:00.963) 0:00:37.297 ************
2025-05-25 03:58:59.409922 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0)
2025-05-25 03:58:59.409935 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-0)
2025-05-25 03:58:59.409943 | orchestrator | ok: [testbed-node-2] => (item=testbed-node-0)
2025-05-25 03:58:59.409950 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-1)
2025-05-25 03:58:59.409958 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-1)
2025-05-25 03:58:59.409966 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-0)
2025-05-25 03:58:59.409974 | orchestrator | ok: [testbed-node-2] => (item=testbed-node-1)
2025-05-25 03:58:59.409981 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-0)
2025-05-25 03:58:59.409989 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-2)
2025-05-25 03:58:59.409997 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-2)
2025-05-25 03:58:59.410004 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-1)
2025-05-25 03:58:59.410012 | orchestrator | ok: [testbed-node-2] => (item=testbed-node-2)
2025-05-25 03:58:59.410047 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-1)
2025-05-25 03:58:59.410055 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-2)
2025-05-25 03:58:59.410063 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-0)
2025-05-25 03:58:59.410070 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-2)
2025-05-25 03:58:59.410078 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-1)
2025-05-25 03:58:59.410086 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-2)
2025-05-25 03:58:59.410093 | orchestrator |
2025-05-25 03:58:59.410101 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv6] *************************
2025-05-25 03:58:59.410109 | orchestrator | Sunday 25 May 2025 03:48:37 +0000 (0:00:03.764) 0:00:41.062 ************
2025-05-25 03:58:59.410117 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)
2025-05-25 03:58:59.410125 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)
2025-05-25 03:58:59.410132 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)
2025-05-25 03:58:59.410140 | orchestrator | skipping: [testbed-node-0]
2025-05-25 03:58:59.410148 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-0)
2025-05-25 03:58:59.410155 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-1)
2025-05-25 03:58:59.410163 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-2)
2025-05-25 03:58:59.410171 | orchestrator | skipping: [testbed-node-1]
2025-05-25 03:58:59.410179 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-0)
2025-05-25 03:58:59.410186 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)
2025-05-25 03:58:59.410194 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)
2025-05-25 03:58:59.410202 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-1)
2025-05-25 03:58:59.410215 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)
2025-05-25 03:58:59.410223 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-2)
2025-05-25 03:58:59.410230 | orchestrator | skipping: [testbed-node-3]
2025-05-25 03:58:59.410238 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-0)
2025-05-25 03:58:59.410246 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-1)
2025-05-25 03:58:59.410254 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-2)
2025-05-25 03:58:59.410261 | orchestrator | skipping: [testbed-node-2]
2025-05-25 03:58:59.410269 | orchestrator | skipping: [testbed-node-4]
2025-05-25 03:58:59.410277 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-0)
2025-05-25 03:58:59.410284 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-1)
2025-05-25 03:58:59.410292 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-2)
2025-05-25 03:58:59.410300 | orchestrator | skipping: [testbed-node-5]
2025-05-25 03:58:59.410307 | orchestrator |
2025-05-25 03:58:59.410315 | orchestrator | TASK [ceph-facts : Import_tasks set_radosgw_address.yml] ***********************
2025-05-25 03:58:59.410323 | orchestrator | Sunday 25 May 2025 03:48:38 +0000 (0:00:00.892) 0:00:41.954 ************
2025-05-25 03:58:59.410331 | orchestrator | skipping: [testbed-node-0]
2025-05-25 03:58:59.410345 | orchestrator | skipping: [testbed-node-1]
2025-05-25 03:58:59.410353 | orchestrator | skipping: [testbed-node-2]
2025-05-25 03:58:59.410361 | orchestrator | included: /ansible/roles/ceph-facts/tasks/set_radosgw_address.yml for testbed-node-3, testbed-node-4, testbed-node-5
2025-05-25 03:58:59.410369 | orchestrator |
2025-05-25 03:58:59.410377 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] ***
2025-05-25 03:58:59.410385 | orchestrator | Sunday 25 May 2025 03:48:39 +0000 (0:00:01.016) 0:00:42.971 ************
2025-05-25 03:58:59.410393 | orchestrator | skipping: [testbed-node-3]
2025-05-25 03:58:59.410401 | orchestrator | skipping: [testbed-node-4]
2025-05-25 03:58:59.410409 | orchestrator | skipping: [testbed-node-5]
2025-05-25 03:58:59.410416 | orchestrator |
2025-05-25 03:58:59.410424 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] ****
2025-05-25 03:58:59.410432 | orchestrator | Sunday 25 May 2025 03:48:39 +0000 (0:00:00.439) 0:00:43.411 ************
2025-05-25 03:58:59.410440 | orchestrator | skipping: [testbed-node-3]
2025-05-25 03:58:59.410448 | orchestrator | skipping: [testbed-node-4]
2025-05-25 03:58:59.410455 | orchestrator | skipping: [testbed-node-5]
2025-05-25 03:58:59.410463 | orchestrator |
2025-05-25 03:58:59.410471 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] ****
2025-05-25 03:58:59.410479 | orchestrator | Sunday 25 May 2025 03:48:40 +0000 (0:00:00.583) 0:00:43.994 ************
2025-05-25 03:58:59.410487 | orchestrator | skipping: [testbed-node-3]
2025-05-25 03:58:59.410494 | orchestrator | skipping: [testbed-node-4]
2025-05-25 03:58:59.410502 | orchestrator | skipping: [testbed-node-5]
2025-05-25 03:58:59.410510 | orchestrator |
2025-05-25 03:58:59.410518 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] ***************
2025-05-25 03:58:59.410526 | orchestrator | Sunday 25 May 2025 03:48:40 +0000 (0:00:00.388) 0:00:44.382 ************
2025-05-25 03:58:59.410534 | orchestrator | ok: [testbed-node-3]
2025-05-25 03:58:59.410541 | orchestrator | ok: [testbed-node-4]
2025-05-25 03:58:59.410549 | orchestrator | ok: [testbed-node-5]
2025-05-25 03:58:59.410557 | orchestrator |
2025-05-25 03:58:59.410565 | orchestrator | TASK [ceph-facts : Set_fact _interface] ****************************************
2025-05-25 03:58:59.410601 | orchestrator | Sunday 25 May 2025 03:48:41 +0000 (0:00:00.523) 0:00:44.906 ************
2025-05-25 03:58:59.410609 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2025-05-25 03:58:59.410617 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2025-05-25 03:58:59.410625 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2025-05-25 03:58:59.410633 | orchestrator | skipping: [testbed-node-3]
2025-05-25 03:58:59.410641 | orchestrator |
2025-05-25 03:58:59.410649 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ******
2025-05-25 03:58:59.410657 | orchestrator | Sunday 25 May 2025 03:48:41 +0000 (0:00:00.355) 0:00:45.262 ************
2025-05-25 03:58:59.410664 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2025-05-25 03:58:59.410672 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2025-05-25 03:58:59.410680 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2025-05-25 03:58:59.410688 | orchestrator | skipping: [testbed-node-3]
2025-05-25 03:58:59.410695 | orchestrator |
2025-05-25 03:58:59.410703 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ******
2025-05-25 03:58:59.410711 | orchestrator | Sunday 25 May 2025 03:48:42 +0000 (0:00:00.339) 0:00:45.601 ************
2025-05-25 03:58:59.410719 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2025-05-25 03:58:59.410727 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2025-05-25 03:58:59.410734 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2025-05-25 03:58:59.410742 | orchestrator | skipping: [testbed-node-3]
2025-05-25 03:58:59.410750 | orchestrator |
2025-05-25 03:58:59.410761 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] ***************************
2025-05-25 03:58:59.410775 | orchestrator | Sunday 25 May 2025 03:48:42 +0000 (0:00:00.508) 0:00:46.110 ************
2025-05-25 03:58:59.410783 | orchestrator | ok: [testbed-node-3]
2025-05-25 03:58:59.410791 | orchestrator | ok: [testbed-node-4]
2025-05-25 03:58:59.410799 | orchestrator | ok: [testbed-node-5]
2025-05-25 03:58:59.410807 | orchestrator |
2025-05-25 03:58:59.410814 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] *************************************
2025-05-25 03:58:59.410822 | orchestrator | Sunday 25 May 2025 03:48:43 +0000 (0:00:00.756) 0:00:46.867 ************
2025-05-25 03:58:59.410830 | orchestrator | ok: [testbed-node-3] => (item=0)
2025-05-25 03:58:59.410838 | orchestrator | ok: [testbed-node-4] => (item=0)
2025-05-25 03:58:59.410846 | orchestrator | ok: [testbed-node-5] => (item=0)
2025-05-25 03:58:59.410854 | orchestrator |
2025-05-25 03:58:59.410861 | orchestrator | TASK [ceph-facts : Set_fact ceph_run_cmd] **************************************
2025-05-25 03:58:59.410869 | orchestrator | Sunday 25 May 2025 03:48:44 +0000 (0:00:00.924) 0:00:47.791 ************
2025-05-25 03:58:59.410881 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0)
2025-05-25 03:58:59.410890 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2025-05-25 03:58:59.410898 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2025-05-25 03:58:59.410949 | orchestrator | ok: [testbed-node-0 -> testbed-node-3(192.168.16.13)] => (item=testbed-node-3)
2025-05-25 03:58:59.410958 | orchestrator | ok: [testbed-node-0 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4)
2025-05-25 03:58:59.410966 | orchestrator | ok: [testbed-node-0 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5)
2025-05-25 03:58:59.410973 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] => (item=testbed-manager)
2025-05-25 03:58:59.410981 | orchestrator |
2025-05-25 03:58:59.410989 | orchestrator | TASK [ceph-facts : Set_fact ceph_admin_command] ********************************
2025-05-25 03:58:59.410997 | orchestrator | Sunday 25 May 2025 03:48:45 +0000 (0:00:01.411) 0:00:49.203 ************
2025-05-25 03:58:59.411005 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0)
2025-05-25 03:58:59.411012 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2025-05-25 03:58:59.411020 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2025-05-25 03:58:59.411028 | orchestrator | ok: [testbed-node-0 -> testbed-node-3(192.168.16.13)] => (item=testbed-node-3)
2025-05-25 03:58:59.411036 | orchestrator | ok: [testbed-node-0 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4)
2025-05-25 03:58:59.411044 | orchestrator | ok: [testbed-node-0 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5)
2025-05-25 03:58:59.411051 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] => (item=testbed-manager)
2025-05-25 03:58:59.411059 | orchestrator |
2025-05-25 03:58:59.411067 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************
2025-05-25 03:58:59.411075 | orchestrator | Sunday 25 May 2025 03:48:48 +0000 (0:00:02.408) 0:00:51.611 ************
2025-05-25 03:58:59.411083 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-05-25 03:58:59.411091 | orchestrator |
2025-05-25 03:58:59.411099 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] *********************
2025-05-25 03:58:59.411107 | orchestrator | Sunday 25 May 2025 03:48:49 +0000 (0:00:01.305) 0:00:52.916 ************
2025-05-25 03:58:59.411115 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-05-25 03:58:59.411123 | orchestrator |
2025-05-25 03:58:59.411130 | orchestrator | TASK [ceph-handler : Check for a mon container] ********************************
2025-05-25 03:58:59.411138 | orchestrator | Sunday 25 May 2025 03:48:50 +0000 (0:00:01.406) 0:00:54.323 ************
2025-05-25 03:58:59.411144 | orchestrator | ok: [testbed-node-0]
2025-05-25 03:58:59.411156 | orchestrator | skipping: [testbed-node-3]
2025-05-25 03:58:59.411163 | orchestrator | ok: [testbed-node-1]
2025-05-25 03:58:59.411169 | orchestrator | skipping: [testbed-node-4]
2025-05-25 03:58:59.411176 | orchestrator | ok: [testbed-node-2]
2025-05-25 03:58:59.411182 | orchestrator | skipping: [testbed-node-5]
2025-05-25 03:58:59.411189 | orchestrator |
2025-05-25 03:58:59.411196 | orchestrator | TASK [ceph-handler : Check for an osd container] *******************************
2025-05-25 03:58:59.411202 | orchestrator | Sunday 25 May 2025 03:48:51 +0000 (0:00:01.037) 0:00:55.360 ************
2025-05-25 03:58:59.411209 | orchestrator | skipping: [testbed-node-0]
2025-05-25 03:58:59.411215 | orchestrator | skipping: [testbed-node-1]
2025-05-25 03:58:59.411222 | orchestrator | skipping: [testbed-node-2]
2025-05-25 03:58:59.411228 | orchestrator | ok: [testbed-node-3]
2025-05-25 03:58:59.411235 | orchestrator | ok: [testbed-node-4]
2025-05-25 03:58:59.411242 | orchestrator | ok: [testbed-node-5]
2025-05-25 03:58:59.411248 | orchestrator |
2025-05-25 03:58:59.411255 | orchestrator | TASK [ceph-handler : Check for a mds container] ********************************
2025-05-25 03:58:59.411262 | orchestrator | Sunday 25 May 2025 03:48:53 +0000 (0:00:01.631) 0:00:56.991 ************
2025-05-25 03:58:59.411268 | orchestrator | skipping: [testbed-node-0]
2025-05-25 03:58:59.411275 | orchestrator | skipping: [testbed-node-1]
2025-05-25 03:58:59.411281 | orchestrator | skipping: [testbed-node-2]
2025-05-25 03:58:59.411288 | orchestrator | ok: [testbed-node-3]
2025-05-25 03:58:59.411294 | orchestrator | ok: [testbed-node-4]
2025-05-25 03:58:59.411301 | orchestrator | ok: [testbed-node-5]
2025-05-25 03:58:59.411308 | orchestrator |
2025-05-25 03:58:59.411314 | orchestrator | TASK [ceph-handler : Check for a rgw container] ********************************
2025-05-25 03:58:59.411321 | orchestrator | Sunday 25 May 2025 03:48:54 +0000 (0:00:01.127) 0:00:58.119 ************
2025-05-25 03:58:59.411328 | orchestrator | skipping: [testbed-node-0]
2025-05-25 03:58:59.411334 | orchestrator | skipping: [testbed-node-1]
2025-05-25 03:58:59.411344 | orchestrator | skipping: [testbed-node-2]
2025-05-25 03:58:59.411351 | orchestrator | ok: [testbed-node-3]
2025-05-25 03:58:59.411358 | orchestrator | ok: [testbed-node-4]
2025-05-25 03:58:59.411364 | orchestrator | ok: [testbed-node-5]
2025-05-25 03:58:59.411371 | orchestrator |
2025-05-25 03:58:59.411377 | orchestrator | TASK [ceph-handler : Check for a mgr container] ********************************
2025-05-25 03:58:59.411384 | orchestrator | Sunday 25 May 2025 03:48:55 +0000 (0:00:01.179) 0:00:59.299 ************
2025-05-25 03:58:59.411390 | orchestrator | ok: [testbed-node-0]
2025-05-25 03:58:59.411397 | orchestrator | skipping: [testbed-node-3]
2025-05-25 03:58:59.411403 | orchestrator | skipping: [testbed-node-4]
2025-05-25 03:58:59.411410 | orchestrator | ok: [testbed-node-1]
2025-05-25 03:58:59.411416 | orchestrator | skipping: [testbed-node-5] 2025-05-25 03:58:59.411423 | orchestrator | ok: [testbed-node-2] 2025-05-25 03:58:59.411430 | orchestrator | 2025-05-25 03:58:59.411436 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] ************************* 2025-05-25 03:58:59.411443 | orchestrator | Sunday 25 May 2025 03:48:56 +0000 (0:00:01.169) 0:01:00.468 ************ 2025-05-25 03:58:59.411453 | orchestrator | skipping: [testbed-node-0] 2025-05-25 03:58:59.411460 | orchestrator | skipping: [testbed-node-1] 2025-05-25 03:58:59.411466 | orchestrator | skipping: [testbed-node-2] 2025-05-25 03:58:59.411473 | orchestrator | skipping: [testbed-node-3] 2025-05-25 03:58:59.411480 | orchestrator | skipping: [testbed-node-4] 2025-05-25 03:58:59.411486 | orchestrator | skipping: [testbed-node-5] 2025-05-25 03:58:59.411493 | orchestrator | 2025-05-25 03:58:59.411500 | orchestrator | TASK [ceph-handler : Check for a nfs container] ******************************** 2025-05-25 03:58:59.411506 | orchestrator | Sunday 25 May 2025 03:48:57 +0000 (0:00:00.616) 0:01:01.085 ************ 2025-05-25 03:58:59.411513 | orchestrator | skipping: [testbed-node-0] 2025-05-25 03:58:59.411519 | orchestrator | skipping: [testbed-node-1] 2025-05-25 03:58:59.411526 | orchestrator | skipping: [testbed-node-2] 2025-05-25 03:58:59.411532 | orchestrator | skipping: [testbed-node-3] 2025-05-25 03:58:59.411539 | orchestrator | skipping: [testbed-node-4] 2025-05-25 03:58:59.411551 | orchestrator | skipping: [testbed-node-5] 2025-05-25 03:58:59.411557 | orchestrator | 2025-05-25 03:58:59.411564 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] ************************* 2025-05-25 03:58:59.411571 | orchestrator | Sunday 25 May 2025 03:48:58 +0000 (0:00:01.048) 0:01:02.133 ************ 2025-05-25 03:58:59.411577 | orchestrator | ok: [testbed-node-0] 2025-05-25 03:58:59.411584 | orchestrator | ok: [testbed-node-1] 2025-05-25 
03:58:59.411591 | orchestrator | ok: [testbed-node-2] 2025-05-25 03:58:59.411597 | orchestrator | ok: [testbed-node-3] 2025-05-25 03:58:59.411604 | orchestrator | ok: [testbed-node-4] 2025-05-25 03:58:59.411610 | orchestrator | ok: [testbed-node-5] 2025-05-25 03:58:59.411617 | orchestrator | 2025-05-25 03:58:59.411623 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] ********************** 2025-05-25 03:58:59.411630 | orchestrator | Sunday 25 May 2025 03:49:00 +0000 (0:00:01.423) 0:01:03.557 ************ 2025-05-25 03:58:59.411637 | orchestrator | ok: [testbed-node-0] 2025-05-25 03:58:59.411643 | orchestrator | ok: [testbed-node-2] 2025-05-25 03:58:59.411650 | orchestrator | ok: [testbed-node-1] 2025-05-25 03:58:59.411656 | orchestrator | ok: [testbed-node-3] 2025-05-25 03:58:59.411663 | orchestrator | ok: [testbed-node-4] 2025-05-25 03:58:59.411669 | orchestrator | ok: [testbed-node-5] 2025-05-25 03:58:59.411676 | orchestrator | 2025-05-25 03:58:59.411683 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] ******************* 2025-05-25 03:58:59.411689 | orchestrator | Sunday 25 May 2025 03:49:01 +0000 (0:00:01.806) 0:01:05.363 ************ 2025-05-25 03:58:59.411696 | orchestrator | skipping: [testbed-node-0] 2025-05-25 03:58:59.411702 | orchestrator | skipping: [testbed-node-1] 2025-05-25 03:58:59.411709 | orchestrator | skipping: [testbed-node-2] 2025-05-25 03:58:59.411715 | orchestrator | skipping: [testbed-node-3] 2025-05-25 03:58:59.411722 | orchestrator | skipping: [testbed-node-4] 2025-05-25 03:58:59.411729 | orchestrator | skipping: [testbed-node-5] 2025-05-25 03:58:59.411735 | orchestrator | 2025-05-25 03:58:59.411742 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ****************************** 2025-05-25 03:58:59.411748 | orchestrator | Sunday 25 May 2025 03:49:02 +0000 (0:00:00.805) 0:01:06.169 ************ 2025-05-25 03:58:59.411755 | orchestrator | ok: [testbed-node-0] 2025-05-25 
03:58:59.411762 | orchestrator | ok: [testbed-node-1] 2025-05-25 03:58:59.411768 | orchestrator | ok: [testbed-node-2] 2025-05-25 03:58:59.411775 | orchestrator | skipping: [testbed-node-3] 2025-05-25 03:58:59.411781 | orchestrator | skipping: [testbed-node-4] 2025-05-25 03:58:59.411788 | orchestrator | skipping: [testbed-node-5] 2025-05-25 03:58:59.411794 | orchestrator | 2025-05-25 03:58:59.411801 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ****************************** 2025-05-25 03:58:59.411808 | orchestrator | Sunday 25 May 2025 03:49:03 +0000 (0:00:00.898) 0:01:07.067 ************ 2025-05-25 03:58:59.411814 | orchestrator | skipping: [testbed-node-0] 2025-05-25 03:58:59.411821 | orchestrator | skipping: [testbed-node-1] 2025-05-25 03:58:59.411827 | orchestrator | skipping: [testbed-node-2] 2025-05-25 03:58:59.411834 | orchestrator | ok: [testbed-node-3] 2025-05-25 03:58:59.411841 | orchestrator | ok: [testbed-node-4] 2025-05-25 03:58:59.411847 | orchestrator | ok: [testbed-node-5] 2025-05-25 03:58:59.411854 | orchestrator | 2025-05-25 03:58:59.411860 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ****************************** 2025-05-25 03:58:59.411867 | orchestrator | Sunday 25 May 2025 03:49:04 +0000 (0:00:00.643) 0:01:07.710 ************ 2025-05-25 03:58:59.411874 | orchestrator | skipping: [testbed-node-0] 2025-05-25 03:58:59.411880 | orchestrator | skipping: [testbed-node-1] 2025-05-25 03:58:59.411887 | orchestrator | skipping: [testbed-node-2] 2025-05-25 03:58:59.411893 | orchestrator | ok: [testbed-node-3] 2025-05-25 03:58:59.411900 | orchestrator | ok: [testbed-node-4] 2025-05-25 03:58:59.411920 | orchestrator | ok: [testbed-node-5] 2025-05-25 03:58:59.411927 | orchestrator | 2025-05-25 03:58:59.411934 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ****************************** 2025-05-25 03:58:59.411941 | orchestrator | Sunday 25 May 2025 03:49:05 +0000 (0:00:01.128) 0:01:08.839 
************ 2025-05-25 03:58:59.411953 | orchestrator | skipping: [testbed-node-0] 2025-05-25 03:58:59.411960 | orchestrator | skipping: [testbed-node-1] 2025-05-25 03:58:59.411966 | orchestrator | skipping: [testbed-node-2] 2025-05-25 03:58:59.411973 | orchestrator | ok: [testbed-node-3] 2025-05-25 03:58:59.411979 | orchestrator | ok: [testbed-node-4] 2025-05-25 03:58:59.411986 | orchestrator | ok: [testbed-node-5] 2025-05-25 03:58:59.411993 | orchestrator | 2025-05-25 03:58:59.411999 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ****************************** 2025-05-25 03:58:59.412006 | orchestrator | Sunday 25 May 2025 03:49:06 +0000 (0:00:00.761) 0:01:09.601 ************ 2025-05-25 03:58:59.412013 | orchestrator | skipping: [testbed-node-0] 2025-05-25 03:58:59.412023 | orchestrator | skipping: [testbed-node-1] 2025-05-25 03:58:59.412029 | orchestrator | skipping: [testbed-node-2] 2025-05-25 03:58:59.412036 | orchestrator | skipping: [testbed-node-3] 2025-05-25 03:58:59.412043 | orchestrator | skipping: [testbed-node-4] 2025-05-25 03:58:59.412049 | orchestrator | skipping: [testbed-node-5] 2025-05-25 03:58:59.412056 | orchestrator | 2025-05-25 03:58:59.412063 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ****************************** 2025-05-25 03:58:59.412069 | orchestrator | Sunday 25 May 2025 03:49:06 +0000 (0:00:00.825) 0:01:10.426 ************ 2025-05-25 03:58:59.412076 | orchestrator | skipping: [testbed-node-0] 2025-05-25 03:58:59.412082 | orchestrator | skipping: [testbed-node-1] 2025-05-25 03:58:59.412089 | orchestrator | skipping: [testbed-node-2] 2025-05-25 03:58:59.412096 | orchestrator | skipping: [testbed-node-3] 2025-05-25 03:58:59.412102 | orchestrator | skipping: [testbed-node-4] 2025-05-25 03:58:59.412109 | orchestrator | skipping: [testbed-node-5] 2025-05-25 03:58:59.412115 | orchestrator | 2025-05-25 03:58:59.412122 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] 
****************************** 2025-05-25 03:58:59.412132 | orchestrator | Sunday 25 May 2025 03:49:07 +0000 (0:00:00.574) 0:01:11.001 ************ 2025-05-25 03:58:59.412139 | orchestrator | ok: [testbed-node-0] 2025-05-25 03:58:59.412145 | orchestrator | ok: [testbed-node-1] 2025-05-25 03:58:59.412152 | orchestrator | ok: [testbed-node-2] 2025-05-25 03:58:59.412159 | orchestrator | skipping: [testbed-node-3] 2025-05-25 03:58:59.412165 | orchestrator | skipping: [testbed-node-4] 2025-05-25 03:58:59.412172 | orchestrator | skipping: [testbed-node-5] 2025-05-25 03:58:59.412178 | orchestrator | 2025-05-25 03:58:59.412185 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] **************************** 2025-05-25 03:58:59.412192 | orchestrator | Sunday 25 May 2025 03:49:08 +0000 (0:00:00.749) 0:01:11.751 ************ 2025-05-25 03:58:59.412198 | orchestrator | ok: [testbed-node-0] 2025-05-25 03:58:59.412205 | orchestrator | ok: [testbed-node-1] 2025-05-25 03:58:59.412211 | orchestrator | ok: [testbed-node-2] 2025-05-25 03:58:59.412218 | orchestrator | ok: [testbed-node-3] 2025-05-25 03:58:59.412225 | orchestrator | ok: [testbed-node-4] 2025-05-25 03:58:59.412231 | orchestrator | ok: [testbed-node-5] 2025-05-25 03:58:59.412238 | orchestrator | 2025-05-25 03:58:59.412245 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] ************************* 2025-05-25 03:58:59.412251 | orchestrator | Sunday 25 May 2025 03:49:08 +0000 (0:00:00.551) 0:01:12.302 ************ 2025-05-25 03:58:59.412258 | orchestrator | ok: [testbed-node-0] 2025-05-25 03:58:59.412265 | orchestrator | ok: [testbed-node-1] 2025-05-25 03:58:59.412272 | orchestrator | ok: [testbed-node-2] 2025-05-25 03:58:59.412278 | orchestrator | ok: [testbed-node-3] 2025-05-25 03:58:59.412285 | orchestrator | ok: [testbed-node-4] 2025-05-25 03:58:59.412291 | orchestrator | ok: [testbed-node-5] 2025-05-25 03:58:59.412298 | orchestrator | 2025-05-25 03:58:59.412304 | orchestrator | 
TASK [ceph-container-common : Generate systemd ceph target file] *************** 2025-05-25 03:58:59.412311 | orchestrator | Sunday 25 May 2025 03:49:09 +0000 (0:00:01.168) 0:01:13.471 ************ 2025-05-25 03:58:59.412318 | orchestrator | changed: [testbed-node-1] 2025-05-25 03:58:59.412325 | orchestrator | changed: [testbed-node-2] 2025-05-25 03:58:59.412331 | orchestrator | changed: [testbed-node-0] 2025-05-25 03:58:59.412338 | orchestrator | changed: [testbed-node-3] 2025-05-25 03:58:59.412349 | orchestrator | changed: [testbed-node-4] 2025-05-25 03:58:59.412356 | orchestrator | changed: [testbed-node-5] 2025-05-25 03:58:59.412362 | orchestrator | 2025-05-25 03:58:59.412369 | orchestrator | TASK [ceph-container-common : Enable ceph.target] ****************************** 2025-05-25 03:58:59.412376 | orchestrator | Sunday 25 May 2025 03:49:11 +0000 (0:00:01.579) 0:01:15.050 ************ 2025-05-25 03:58:59.412383 | orchestrator | changed: [testbed-node-2] 2025-05-25 03:58:59.412389 | orchestrator | changed: [testbed-node-1] 2025-05-25 03:58:59.412396 | orchestrator | changed: [testbed-node-4] 2025-05-25 03:58:59.412402 | orchestrator | changed: [testbed-node-3] 2025-05-25 03:58:59.412409 | orchestrator | changed: [testbed-node-0] 2025-05-25 03:58:59.412415 | orchestrator | changed: [testbed-node-5] 2025-05-25 03:58:59.412422 | orchestrator | 2025-05-25 03:58:59.412429 | orchestrator | TASK [ceph-container-common : Include prerequisites.yml] *********************** 2025-05-25 03:58:59.412435 | orchestrator | Sunday 25 May 2025 03:49:13 +0000 (0:00:01.982) 0:01:17.033 ************ 2025-05-25 03:58:59.412442 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/prerequisites.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-05-25 03:58:59.412449 | orchestrator | 2025-05-25 03:58:59.412456 | orchestrator | TASK [ceph-container-common : Stop lvmetad] ************************************ 
2025-05-25 03:58:59.412462 | orchestrator | Sunday 25 May 2025 03:49:14 +0000 (0:00:01.096) 0:01:18.129 ************ 2025-05-25 03:58:59.412469 | orchestrator | skipping: [testbed-node-0] 2025-05-25 03:58:59.412476 | orchestrator | skipping: [testbed-node-1] 2025-05-25 03:58:59.412483 | orchestrator | skipping: [testbed-node-2] 2025-05-25 03:58:59.412489 | orchestrator | skipping: [testbed-node-3] 2025-05-25 03:58:59.412496 | orchestrator | skipping: [testbed-node-4] 2025-05-25 03:58:59.412502 | orchestrator | skipping: [testbed-node-5] 2025-05-25 03:58:59.412509 | orchestrator | 2025-05-25 03:58:59.412516 | orchestrator | TASK [ceph-container-common : Disable and mask lvmetad service] **************** 2025-05-25 03:58:59.412522 | orchestrator | Sunday 25 May 2025 03:49:15 +0000 (0:00:00.725) 0:01:18.855 ************ 2025-05-25 03:58:59.412529 | orchestrator | skipping: [testbed-node-0] 2025-05-25 03:58:59.412535 | orchestrator | skipping: [testbed-node-1] 2025-05-25 03:58:59.412542 | orchestrator | skipping: [testbed-node-2] 2025-05-25 03:58:59.412549 | orchestrator | skipping: [testbed-node-3] 2025-05-25 03:58:59.412555 | orchestrator | skipping: [testbed-node-4] 2025-05-25 03:58:59.412562 | orchestrator | skipping: [testbed-node-5] 2025-05-25 03:58:59.412568 | orchestrator | 2025-05-25 03:58:59.412575 | orchestrator | TASK [ceph-container-common : Remove ceph udev rules] ************************** 2025-05-25 03:58:59.412582 | orchestrator | Sunday 25 May 2025 03:49:15 +0000 (0:00:00.526) 0:01:19.381 ************ 2025-05-25 03:58:59.412589 | orchestrator | ok: [testbed-node-1] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules) 2025-05-25 03:58:59.412595 | orchestrator | ok: [testbed-node-0] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules) 2025-05-25 03:58:59.412602 | orchestrator | ok: [testbed-node-2] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules) 2025-05-25 03:58:59.412608 | orchestrator | ok: [testbed-node-3] => 
(item=/usr/lib/udev/rules.d/95-ceph-osd.rules) 2025-05-25 03:58:59.412621 | orchestrator | ok: [testbed-node-1] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules) 2025-05-25 03:58:59.412628 | orchestrator | ok: [testbed-node-0] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules) 2025-05-25 03:58:59.412634 | orchestrator | ok: [testbed-node-4] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules) 2025-05-25 03:58:59.412641 | orchestrator | ok: [testbed-node-5] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules) 2025-05-25 03:58:59.412647 | orchestrator | ok: [testbed-node-2] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules) 2025-05-25 03:58:59.412654 | orchestrator | ok: [testbed-node-3] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules) 2025-05-25 03:58:59.412660 | orchestrator | ok: [testbed-node-4] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules) 2025-05-25 03:58:59.412667 | orchestrator | ok: [testbed-node-5] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules) 2025-05-25 03:58:59.412678 | orchestrator | 2025-05-25 03:58:59.412688 | orchestrator | TASK [ceph-container-common : Ensure tmpfiles.d is present] ******************** 2025-05-25 03:58:59.412695 | orchestrator | Sunday 25 May 2025 03:49:17 +0000 (0:00:01.484) 0:01:20.866 ************ 2025-05-25 03:58:59.412702 | orchestrator | changed: [testbed-node-1] 2025-05-25 03:58:59.412709 | orchestrator | changed: [testbed-node-2] 2025-05-25 03:58:59.412715 | orchestrator | changed: [testbed-node-0] 2025-05-25 03:58:59.412722 | orchestrator | changed: [testbed-node-3] 2025-05-25 03:58:59.412728 | orchestrator | changed: [testbed-node-4] 2025-05-25 03:58:59.412735 | orchestrator | changed: [testbed-node-5] 2025-05-25 03:58:59.412742 | orchestrator | 2025-05-25 03:58:59.412748 | orchestrator | TASK [ceph-container-common : Restore certificates selinux context] ************ 2025-05-25 03:58:59.412755 | orchestrator | Sunday 25 May 2025 03:49:18 +0000 
(0:00:00.827) 0:01:21.693 ************ 2025-05-25 03:58:59.412762 | orchestrator | skipping: [testbed-node-0] 2025-05-25 03:58:59.412768 | orchestrator | skipping: [testbed-node-1] 2025-05-25 03:58:59.412775 | orchestrator | skipping: [testbed-node-2] 2025-05-25 03:58:59.412781 | orchestrator | skipping: [testbed-node-3] 2025-05-25 03:58:59.412788 | orchestrator | skipping: [testbed-node-4] 2025-05-25 03:58:59.412794 | orchestrator | skipping: [testbed-node-5] 2025-05-25 03:58:59.412801 | orchestrator | 2025-05-25 03:58:59.412807 | orchestrator | TASK [ceph-container-common : Install python3 on osd nodes] ******************** 2025-05-25 03:58:59.412814 | orchestrator | Sunday 25 May 2025 03:49:18 +0000 (0:00:00.770) 0:01:22.464 ************ 2025-05-25 03:58:59.412821 | orchestrator | skipping: [testbed-node-0] 2025-05-25 03:58:59.412827 | orchestrator | skipping: [testbed-node-1] 2025-05-25 03:58:59.412834 | orchestrator | skipping: [testbed-node-2] 2025-05-25 03:58:59.412841 | orchestrator | skipping: [testbed-node-3] 2025-05-25 03:58:59.412847 | orchestrator | skipping: [testbed-node-4] 2025-05-25 03:58:59.412854 | orchestrator | skipping: [testbed-node-5] 2025-05-25 03:58:59.412860 | orchestrator | 2025-05-25 03:58:59.412867 | orchestrator | TASK [ceph-container-common : Include registry.yml] **************************** 2025-05-25 03:58:59.412874 | orchestrator | Sunday 25 May 2025 03:49:19 +0000 (0:00:00.540) 0:01:23.005 ************ 2025-05-25 03:58:59.412880 | orchestrator | skipping: [testbed-node-0] 2025-05-25 03:58:59.412887 | orchestrator | skipping: [testbed-node-1] 2025-05-25 03:58:59.412893 | orchestrator | skipping: [testbed-node-2] 2025-05-25 03:58:59.412900 | orchestrator | skipping: [testbed-node-3] 2025-05-25 03:58:59.412922 | orchestrator | skipping: [testbed-node-4] 2025-05-25 03:58:59.412928 | orchestrator | skipping: [testbed-node-5] 2025-05-25 03:58:59.412935 | orchestrator | 2025-05-25 03:58:59.412942 | orchestrator | TASK 
[ceph-container-common : Include fetch_image.yml] ************************* 2025-05-25 03:58:59.412948 | orchestrator | Sunday 25 May 2025 03:49:20 +0000 (0:00:00.738) 0:01:23.744 ************ 2025-05-25 03:58:59.412955 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/fetch_image.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-05-25 03:58:59.412962 | orchestrator | 2025-05-25 03:58:59.412969 | orchestrator | TASK [ceph-container-common : Pulling Ceph container image] ******************** 2025-05-25 03:58:59.412975 | orchestrator | Sunday 25 May 2025 03:49:21 +0000 (0:00:01.209) 0:01:24.954 ************ 2025-05-25 03:58:59.412982 | orchestrator | ok: [testbed-node-2] 2025-05-25 03:58:59.412989 | orchestrator | ok: [testbed-node-4] 2025-05-25 03:58:59.412995 | orchestrator | ok: [testbed-node-5] 2025-05-25 03:58:59.413002 | orchestrator | ok: [testbed-node-3] 2025-05-25 03:58:59.413008 | orchestrator | ok: [testbed-node-0] 2025-05-25 03:58:59.413015 | orchestrator | ok: [testbed-node-1] 2025-05-25 03:58:59.413022 | orchestrator | 2025-05-25 03:58:59.413028 | orchestrator | TASK [ceph-container-common : Pulling alertmanager/prometheus/grafana container images] *** 2025-05-25 03:58:59.413035 | orchestrator | Sunday 25 May 2025 03:50:33 +0000 (0:01:11.578) 0:02:36.532 ************ 2025-05-25 03:58:59.413041 | orchestrator | skipping: [testbed-node-0] => (item=docker.io/prom/alertmanager:v0.16.2)  2025-05-25 03:58:59.413053 | orchestrator | skipping: [testbed-node-0] => (item=docker.io/prom/prometheus:v2.7.2)  2025-05-25 03:58:59.413059 | orchestrator | skipping: [testbed-node-0] => (item=docker.io/grafana/grafana:6.7.4)  2025-05-25 03:58:59.413066 | orchestrator | skipping: [testbed-node-0] 2025-05-25 03:58:59.413073 | orchestrator | skipping: [testbed-node-1] => (item=docker.io/prom/alertmanager:v0.16.2)  2025-05-25 03:58:59.413079 | orchestrator | skipping: [testbed-node-1] => 
(item=docker.io/prom/prometheus:v2.7.2)  2025-05-25 03:58:59.413086 | orchestrator | skipping: [testbed-node-1] => (item=docker.io/grafana/grafana:6.7.4)  2025-05-25 03:58:59.413093 | orchestrator | skipping: [testbed-node-1] 2025-05-25 03:58:59.413099 | orchestrator | skipping: [testbed-node-2] => (item=docker.io/prom/alertmanager:v0.16.2)  2025-05-25 03:58:59.413106 | orchestrator | skipping: [testbed-node-2] => (item=docker.io/prom/prometheus:v2.7.2)  2025-05-25 03:58:59.413112 | orchestrator | skipping: [testbed-node-2] => (item=docker.io/grafana/grafana:6.7.4)  2025-05-25 03:58:59.413119 | orchestrator | skipping: [testbed-node-2] 2025-05-25 03:58:59.413126 | orchestrator | skipping: [testbed-node-3] => (item=docker.io/prom/alertmanager:v0.16.2)  2025-05-25 03:58:59.413136 | orchestrator | skipping: [testbed-node-3] => (item=docker.io/prom/prometheus:v2.7.2)  2025-05-25 03:58:59.413143 | orchestrator | skipping: [testbed-node-3] => (item=docker.io/grafana/grafana:6.7.4)  2025-05-25 03:58:59.413149 | orchestrator | skipping: [testbed-node-3] 2025-05-25 03:58:59.413156 | orchestrator | skipping: [testbed-node-4] => (item=docker.io/prom/alertmanager:v0.16.2)  2025-05-25 03:58:59.413163 | orchestrator | skipping: [testbed-node-4] => (item=docker.io/prom/prometheus:v2.7.2)  2025-05-25 03:58:59.413169 | orchestrator | skipping: [testbed-node-4] => (item=docker.io/grafana/grafana:6.7.4)  2025-05-25 03:58:59.413176 | orchestrator | skipping: [testbed-node-4] 2025-05-25 03:58:59.413183 | orchestrator | skipping: [testbed-node-5] => (item=docker.io/prom/alertmanager:v0.16.2)  2025-05-25 03:58:59.413189 | orchestrator | skipping: [testbed-node-5] => (item=docker.io/prom/prometheus:v2.7.2)  2025-05-25 03:58:59.413196 | orchestrator | skipping: [testbed-node-5] => (item=docker.io/grafana/grafana:6.7.4)  2025-05-25 03:58:59.413333 | orchestrator | skipping: [testbed-node-5] 2025-05-25 03:58:59.413350 | orchestrator | 2025-05-25 03:58:59.413361 | orchestrator | TASK 
[ceph-container-common : Pulling node-exporter container image] *********** 2025-05-25 03:58:59.413372 | orchestrator | Sunday 25 May 2025 03:50:34 +0000 (0:00:01.038) 0:02:37.570 ************ 2025-05-25 03:58:59.413382 | orchestrator | skipping: [testbed-node-0] 2025-05-25 03:58:59.413393 | orchestrator | skipping: [testbed-node-1] 2025-05-25 03:58:59.413403 | orchestrator | skipping: [testbed-node-2] 2025-05-25 03:58:59.413414 | orchestrator | skipping: [testbed-node-3] 2025-05-25 03:58:59.413422 | orchestrator | skipping: [testbed-node-4] 2025-05-25 03:58:59.413428 | orchestrator | skipping: [testbed-node-5] 2025-05-25 03:58:59.413435 | orchestrator | 2025-05-25 03:58:59.413442 | orchestrator | TASK [ceph-container-common : Export local ceph dev image] ********************* 2025-05-25 03:58:59.413448 | orchestrator | Sunday 25 May 2025 03:50:34 +0000 (0:00:00.669) 0:02:38.239 ************ 2025-05-25 03:58:59.413455 | orchestrator | skipping: [testbed-node-0] 2025-05-25 03:58:59.413461 | orchestrator | 2025-05-25 03:58:59.413468 | orchestrator | TASK [ceph-container-common : Copy ceph dev image file] ************************ 2025-05-25 03:58:59.413474 | orchestrator | Sunday 25 May 2025 03:50:34 +0000 (0:00:00.184) 0:02:38.424 ************ 2025-05-25 03:58:59.413481 | orchestrator | skipping: [testbed-node-0] 2025-05-25 03:58:59.413487 | orchestrator | skipping: [testbed-node-1] 2025-05-25 03:58:59.413494 | orchestrator | skipping: [testbed-node-2] 2025-05-25 03:58:59.413500 | orchestrator | skipping: [testbed-node-3] 2025-05-25 03:58:59.413506 | orchestrator | skipping: [testbed-node-4] 2025-05-25 03:58:59.413513 | orchestrator | skipping: [testbed-node-5] 2025-05-25 03:58:59.413519 | orchestrator | 2025-05-25 03:58:59.413526 | orchestrator | TASK [ceph-container-common : Load ceph dev image] ***************************** 2025-05-25 03:58:59.413540 | orchestrator | Sunday 25 May 2025 03:50:35 +0000 (0:00:01.087) 0:02:39.511 ************ 2025-05-25 
03:58:59.413546 | orchestrator | skipping: [testbed-node-0] 2025-05-25 03:58:59.413553 | orchestrator | skipping: [testbed-node-1] 2025-05-25 03:58:59.413559 | orchestrator | skipping: [testbed-node-2] 2025-05-25 03:58:59.413566 | orchestrator | skipping: [testbed-node-3] 2025-05-25 03:58:59.413572 | orchestrator | skipping: [testbed-node-4] 2025-05-25 03:58:59.413579 | orchestrator | skipping: [testbed-node-5] 2025-05-25 03:58:59.413585 | orchestrator | 2025-05-25 03:58:59.413592 | orchestrator | TASK [ceph-container-common : Remove tmp ceph dev image file] ****************** 2025-05-25 03:58:59.413599 | orchestrator | Sunday 25 May 2025 03:50:36 +0000 (0:00:00.759) 0:02:40.271 ************ 2025-05-25 03:58:59.413605 | orchestrator | skipping: [testbed-node-0] 2025-05-25 03:58:59.413612 | orchestrator | skipping: [testbed-node-1] 2025-05-25 03:58:59.413618 | orchestrator | skipping: [testbed-node-2] 2025-05-25 03:58:59.413624 | orchestrator | skipping: [testbed-node-3] 2025-05-25 03:58:59.413631 | orchestrator | skipping: [testbed-node-4] 2025-05-25 03:58:59.413637 | orchestrator | skipping: [testbed-node-5] 2025-05-25 03:58:59.413644 | orchestrator | 2025-05-25 03:58:59.413650 | orchestrator | TASK [ceph-container-common : Get ceph version] ******************************** 2025-05-25 03:58:59.413657 | orchestrator | Sunday 25 May 2025 03:50:37 +0000 (0:00:01.072) 0:02:41.343 ************ 2025-05-25 03:58:59.413663 | orchestrator | ok: [testbed-node-0] 2025-05-25 03:58:59.413670 | orchestrator | ok: [testbed-node-1] 2025-05-25 03:58:59.413676 | orchestrator | ok: [testbed-node-2] 2025-05-25 03:58:59.413683 | orchestrator | ok: [testbed-node-3] 2025-05-25 03:58:59.413689 | orchestrator | ok: [testbed-node-4] 2025-05-25 03:58:59.413696 | orchestrator | ok: [testbed-node-5] 2025-05-25 03:58:59.413702 | orchestrator | 2025-05-25 03:58:59.413709 | orchestrator | TASK [ceph-container-common : Set_fact ceph_version ceph_version.stdout.split] *** 2025-05-25 
03:58:59.413715 | orchestrator | Sunday 25 May 2025 03:50:40 +0000 (0:00:02.259) 0:02:43.603 ************ 2025-05-25 03:58:59.413722 | orchestrator | ok: [testbed-node-0] 2025-05-25 03:58:59.413728 | orchestrator | ok: [testbed-node-1] 2025-05-25 03:58:59.413735 | orchestrator | ok: [testbed-node-2] 2025-05-25 03:58:59.413741 | orchestrator | ok: [testbed-node-3] 2025-05-25 03:58:59.413748 | orchestrator | ok: [testbed-node-4] 2025-05-25 03:58:59.413754 | orchestrator | ok: [testbed-node-5] 2025-05-25 03:58:59.413760 | orchestrator | 2025-05-25 03:58:59.413767 | orchestrator | TASK [ceph-container-common : Include release.yml] ***************************** 2025-05-25 03:58:59.413773 | orchestrator | Sunday 25 May 2025 03:50:40 +0000 (0:00:00.779) 0:02:44.382 ************ 2025-05-25 03:58:59.413780 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/release.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-05-25 03:58:59.413788 | orchestrator | 2025-05-25 03:58:59.413795 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release jewel] ********************* 2025-05-25 03:58:59.413801 | orchestrator | Sunday 25 May 2025 03:50:41 +0000 (0:00:01.103) 0:02:45.486 ************ 2025-05-25 03:58:59.413808 | orchestrator | skipping: [testbed-node-0] 2025-05-25 03:58:59.413814 | orchestrator | skipping: [testbed-node-1] 2025-05-25 03:58:59.413821 | orchestrator | skipping: [testbed-node-2] 2025-05-25 03:58:59.413828 | orchestrator | skipping: [testbed-node-3] 2025-05-25 03:58:59.413834 | orchestrator | skipping: [testbed-node-4] 2025-05-25 03:58:59.413841 | orchestrator | skipping: [testbed-node-5] 2025-05-25 03:58:59.413847 | orchestrator | 2025-05-25 03:58:59.413854 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release kraken] ******************** 2025-05-25 03:58:59.413860 | orchestrator | Sunday 25 May 2025 03:50:42 +0000 (0:00:00.689) 0:02:46.175 ************ 2025-05-25 
03:58:59.413871 | orchestrator | skipping: [testbed-node-0]
2025-05-25 03:58:59.413878 | orchestrator | skipping: [testbed-node-1]
2025-05-25 03:58:59.413884 | orchestrator | skipping: [testbed-node-2]
2025-05-25 03:58:59.413891 | orchestrator | skipping: [testbed-node-3]
2025-05-25 03:58:59.413897 | orchestrator | skipping: [testbed-node-4]
2025-05-25 03:58:59.413947 | orchestrator | skipping: [testbed-node-5]
2025-05-25 03:58:59.413956 | orchestrator |
2025-05-25 03:58:59.413964 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release luminous] ******************
2025-05-25 03:58:59.413972 | orchestrator | Sunday 25 May 2025 03:50:43 +0000 (0:00:00.644) 0:02:46.820 ************
2025-05-25 03:58:59.413979 | orchestrator | skipping: [testbed-node-0]
2025-05-25 03:58:59.413987 | orchestrator | skipping: [testbed-node-1]
2025-05-25 03:58:59.413994 | orchestrator | skipping: [testbed-node-2]
2025-05-25 03:58:59.414001 | orchestrator | skipping: [testbed-node-3]
2025-05-25 03:58:59.414009 | orchestrator | skipping: [testbed-node-4]
2025-05-25 03:58:59.414052 | orchestrator | skipping: [testbed-node-5]
2025-05-25 03:58:59.414059 | orchestrator |
2025-05-25 03:58:59.414066 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release mimic] *********************
2025-05-25 03:58:59.414097 | orchestrator | Sunday 25 May 2025 03:50:43 +0000 (0:00:00.574) 0:02:47.395 ************
2025-05-25 03:58:59.414106 | orchestrator | skipping: [testbed-node-0]
2025-05-25 03:58:59.414113 | orchestrator | skipping: [testbed-node-1]
2025-05-25 03:58:59.414120 | orchestrator | skipping: [testbed-node-2]
2025-05-25 03:58:59.414127 | orchestrator | skipping: [testbed-node-3]
2025-05-25 03:58:59.414133 | orchestrator | skipping: [testbed-node-4]
2025-05-25 03:58:59.414140 | orchestrator | skipping: [testbed-node-5]
2025-05-25 03:58:59.414147 | orchestrator |
2025-05-25 03:58:59.414154 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release nautilus] ******************
2025-05-25 03:58:59.414161 | orchestrator | Sunday 25 May 2025 03:50:44 +0000 (0:00:00.740) 0:02:48.135 ************
2025-05-25 03:58:59.414168 | orchestrator | skipping: [testbed-node-0]
2025-05-25 03:58:59.414175 | orchestrator | skipping: [testbed-node-1]
2025-05-25 03:58:59.414182 | orchestrator | skipping: [testbed-node-2]
2025-05-25 03:58:59.414189 | orchestrator | skipping: [testbed-node-3]
2025-05-25 03:58:59.414196 | orchestrator | skipping: [testbed-node-4]
2025-05-25 03:58:59.414203 | orchestrator | skipping: [testbed-node-5]
2025-05-25 03:58:59.414209 | orchestrator |
2025-05-25 03:58:59.414216 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release octopus] *******************
2025-05-25 03:58:59.414224 | orchestrator | Sunday 25 May 2025 03:50:45 +0000 (0:00:00.600) 0:02:48.736 ************
2025-05-25 03:58:59.414231 | orchestrator | skipping: [testbed-node-0]
2025-05-25 03:58:59.414237 | orchestrator | skipping: [testbed-node-1]
2025-05-25 03:58:59.414244 | orchestrator | skipping: [testbed-node-2]
2025-05-25 03:58:59.414251 | orchestrator | skipping: [testbed-node-3]
2025-05-25 03:58:59.414258 | orchestrator | skipping: [testbed-node-4]
2025-05-25 03:58:59.414265 | orchestrator | skipping: [testbed-node-5]
2025-05-25 03:58:59.414272 | orchestrator |
2025-05-25 03:58:59.414279 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release pacific] *******************
2025-05-25 03:58:59.414286 | orchestrator | Sunday 25 May 2025 03:50:45 +0000 (0:00:00.691) 0:02:49.428 ************
2025-05-25 03:58:59.414292 | orchestrator | skipping: [testbed-node-0]
2025-05-25 03:58:59.414298 | orchestrator | skipping: [testbed-node-1]
2025-05-25 03:58:59.414304 | orchestrator | skipping: [testbed-node-2]
2025-05-25 03:58:59.414310 | orchestrator | skipping: [testbed-node-3]
2025-05-25 03:58:59.414316 | orchestrator | skipping: [testbed-node-4]
2025-05-25 03:58:59.414322 | orchestrator | skipping: [testbed-node-5]
2025-05-25 03:58:59.414328 | orchestrator |
2025-05-25 03:58:59.414334 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release quincy] ********************
2025-05-25 03:58:59.414341 | orchestrator | Sunday 25 May 2025 03:50:46 +0000 (0:00:00.621) 0:02:50.050 ************
2025-05-25 03:58:59.414347 | orchestrator | skipping: [testbed-node-0]
2025-05-25 03:58:59.414353 | orchestrator | skipping: [testbed-node-1]
2025-05-25 03:58:59.414359 | orchestrator | skipping: [testbed-node-2]
2025-05-25 03:58:59.414365 | orchestrator | skipping: [testbed-node-3]
2025-05-25 03:58:59.414371 | orchestrator | skipping: [testbed-node-4]
2025-05-25 03:58:59.414377 | orchestrator | skipping: [testbed-node-5]
2025-05-25 03:58:59.414383 | orchestrator |
2025-05-25 03:58:59.414389 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release reef] **********************
2025-05-25 03:58:59.414400 | orchestrator | Sunday 25 May 2025 03:50:47 +0000 (0:00:00.742) 0:02:50.792 ************
2025-05-25 03:58:59.414406 | orchestrator | ok: [testbed-node-0]
2025-05-25 03:58:59.414412 | orchestrator | ok: [testbed-node-1]
2025-05-25 03:58:59.414418 | orchestrator | ok: [testbed-node-2]
2025-05-25 03:58:59.414424 | orchestrator | ok: [testbed-node-3]
2025-05-25 03:58:59.414430 | orchestrator | ok: [testbed-node-4]
2025-05-25 03:58:59.414436 | orchestrator | ok: [testbed-node-5]
2025-05-25 03:58:59.414442 | orchestrator |
2025-05-25 03:58:59.414448 | orchestrator | TASK [ceph-config : Include create_ceph_initial_dirs.yml] **********************
2025-05-25 03:58:59.414455 | orchestrator | Sunday 25 May 2025 03:50:48 +0000 (0:00:00.962) 0:02:51.755 ************
2025-05-25 03:58:59.414461 | orchestrator | included: /ansible/roles/ceph-config/tasks/create_ceph_initial_dirs.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-05-25 03:58:59.414468 | orchestrator |
2025-05-25 03:58:59.414474 | orchestrator | TASK [ceph-config : Create ceph initial directories] ***************************
2025-05-25 03:58:59.414481 | orchestrator | Sunday 25 May 2025 03:50:49 +0000 (0:00:00.908) 0:02:52.663 ************
2025-05-25 03:58:59.414487 | orchestrator | changed: [testbed-node-0] => (item=/etc/ceph)
2025-05-25 03:58:59.414493 | orchestrator | changed: [testbed-node-2] => (item=/etc/ceph)
2025-05-25 03:58:59.414499 | orchestrator | changed: [testbed-node-1] => (item=/etc/ceph)
2025-05-25 03:58:59.414505 | orchestrator | changed: [testbed-node-3] => (item=/etc/ceph)
2025-05-25 03:58:59.414511 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/)
2025-05-25 03:58:59.414517 | orchestrator | changed: [testbed-node-4] => (item=/etc/ceph)
2025-05-25 03:58:59.414524 | orchestrator | changed: [testbed-node-5] => (item=/etc/ceph)
2025-05-25 03:58:59.414530 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/)
2025-05-25 03:58:59.414536 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/)
2025-05-25 03:58:59.414542 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/)
2025-05-25 03:58:59.414548 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/mon)
2025-05-25 03:58:59.414558 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/)
2025-05-25 03:58:59.414564 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/)
2025-05-25 03:58:59.414570 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/mon)
2025-05-25 03:58:59.414576 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/mon)
2025-05-25 03:58:59.414582 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/mon)
2025-05-25 03:58:59.414588 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/osd)
2025-05-25 03:58:59.414595 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/mon)
2025-05-25 03:58:59.414601 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/mon)
2025-05-25 03:58:59.414607 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/osd)
2025-05-25 03:58:59.414613 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/osd)
2025-05-25 03:58:59.414637 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/osd)
2025-05-25 03:58:59.414644 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/mds)
2025-05-25 03:58:59.414650 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/osd)
2025-05-25 03:58:59.414656 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/osd)
2025-05-25 03:58:59.414662 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/mds)
2025-05-25 03:58:59.414668 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/mds)
2025-05-25 03:58:59.414674 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/mds)
2025-05-25 03:58:59.414681 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/tmp)
2025-05-25 03:58:59.414687 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/mds)
2025-05-25 03:58:59.414693 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/mds)
2025-05-25 03:58:59.414699 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/tmp)
2025-05-25 03:58:59.414712 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/tmp)
2025-05-25 03:58:59.414718 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/tmp)
2025-05-25 03:58:59.414724 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/crash)
2025-05-25 03:58:59.414730 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/tmp)
2025-05-25 03:58:59.414736 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/crash)
2025-05-25 03:58:59.414743 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/tmp)
2025-05-25 03:58:59.414749 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/crash)
2025-05-25 03:58:59.414755 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/crash)
2025-05-25 03:58:59.414761 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/radosgw)
2025-05-25 03:58:59.414767 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/crash)
2025-05-25 03:58:59.414773 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/radosgw)
2025-05-25 03:58:59.414779 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/crash)
2025-05-25 03:58:59.414785 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/radosgw)
2025-05-25 03:58:59.414791 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/radosgw)
2025-05-25 03:58:59.414797 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-rgw)
2025-05-25 03:58:59.414804 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/radosgw)
2025-05-25 03:58:59.414810 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-rgw)
2025-05-25 03:58:59.414816 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-rgw)
2025-05-25 03:58:59.414822 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/radosgw)
2025-05-25 03:58:59.414828 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-rgw)
2025-05-25 03:58:59.414834 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-mgr)
2025-05-25 03:58:59.414840 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-rgw)
2025-05-25 03:58:59.414846 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-mgr)
2025-05-25 03:58:59.414852 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-mgr)
2025-05-25 03:58:59.414858 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-rgw)
2025-05-25 03:58:59.414864 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-mgr)
2025-05-25 03:58:59.414870 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-mds)
2025-05-25 03:58:59.414876 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-mgr)
2025-05-25 03:58:59.414882 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-mds)
2025-05-25 03:58:59.414888 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-mds)
2025-05-25 03:58:59.414895 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-mgr)
2025-05-25 03:58:59.414901 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-osd)
2025-05-25 03:58:59.414920 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-mds)
2025-05-25 03:58:59.414927 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-mds)
2025-05-25 03:58:59.414933 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-osd)
2025-05-25 03:58:59.414939 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-mds)
2025-05-25 03:58:59.414945 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-osd)
2025-05-25 03:58:59.414951 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-rbd)
2025-05-25 03:58:59.414961 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-osd)
2025-05-25 03:58:59.414967 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-osd)
2025-05-25 03:58:59.414977 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-rbd)
2025-05-25 03:58:59.414983 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-osd)
2025-05-25 03:58:59.414989 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-rbd)
2025-05-25 03:58:59.414995 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-rbd-mirror)
2025-05-25 03:58:59.415001 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-rbd)
2025-05-25 03:58:59.415007 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-rbd)
2025-05-25 03:58:59.415014 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-rbd-mirror)
2025-05-25 03:58:59.415038 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-rbd)
2025-05-25 03:58:59.415046 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-rbd-mirror)
2025-05-25 03:58:59.415052 | orchestrator | changed: [testbed-node-0] => (item=/var/run/ceph)
2025-05-25 03:58:59.415058 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-rbd-mirror)
2025-05-25 03:58:59.415065 | orchestrator | changed: [testbed-node-3] => (item=/var/run/ceph)
2025-05-25 03:58:59.415071 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-rbd-mirror)
2025-05-25 03:58:59.415077 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-rbd-mirror)
2025-05-25 03:58:59.415083 | orchestrator | changed: [testbed-node-2] => (item=/var/run/ceph)
2025-05-25 03:58:59.415089 | orchestrator | changed: [testbed-node-0] => (item=/var/log/ceph)
2025-05-25 03:58:59.415095 | orchestrator | changed: [testbed-node-5] => (item=/var/run/ceph)
2025-05-25 03:58:59.415101 | orchestrator | changed: [testbed-node-3] => (item=/var/log/ceph)
2025-05-25 03:58:59.415108 | orchestrator | changed: [testbed-node-1] => (item=/var/run/ceph)
2025-05-25 03:58:59.415114 | orchestrator | changed: [testbed-node-4] => (item=/var/run/ceph)
2025-05-25 03:58:59.415120 | orchestrator | changed: [testbed-node-2] => (item=/var/log/ceph)
2025-05-25 03:58:59.415126 | orchestrator | changed: [testbed-node-5] => (item=/var/log/ceph)
2025-05-25 03:58:59.415132 | orchestrator | changed: [testbed-node-1] => (item=/var/log/ceph)
2025-05-25 03:58:59.415138 | orchestrator | changed: [testbed-node-4] => (item=/var/log/ceph)
2025-05-25 03:58:59.415144 | orchestrator |
2025-05-25 03:58:59.415150 | orchestrator | TASK [ceph-config : Include_tasks rgw_systemd_environment_file.yml] ************
2025-05-25 03:58:59.415156 | orchestrator | Sunday 25 May 2025 03:50:55 +0000 (0:00:06.055) 0:02:58.719 ************
2025-05-25 03:58:59.415163 | orchestrator | skipping: [testbed-node-0]
2025-05-25 03:58:59.415169 | orchestrator | skipping: [testbed-node-1]
2025-05-25 03:58:59.415175 | orchestrator | skipping: [testbed-node-2]
2025-05-25 03:58:59.415181 | orchestrator | included: /ansible/roles/ceph-config/tasks/rgw_systemd_environment_file.yml for testbed-node-3, testbed-node-4, testbed-node-5
2025-05-25 03:58:59.415188 | orchestrator |
2025-05-25 03:58:59.415194 | orchestrator | TASK [ceph-config : Create rados gateway instance directories] *****************
2025-05-25 03:58:59.415200 | orchestrator | Sunday 25 May 2025 03:50:56 +0000 (0:00:00.925) 0:02:59.645 ************
2025-05-25 03:58:59.415206 | orchestrator | changed: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081})
2025-05-25 03:58:59.415212 | orchestrator | changed: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081})
2025-05-25 03:58:59.415219 | orchestrator | changed: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081})
2025-05-25 03:58:59.415225 | orchestrator |
2025-05-25 03:58:59.415231 | orchestrator | TASK [ceph-config : Generate environment file] *********************************
2025-05-25 03:58:59.415237 | orchestrator | Sunday 25 May 2025 03:50:56 +0000 (0:00:00.722) 0:03:00.367 ************
2025-05-25 03:58:59.415243 | orchestrator | changed: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081})
2025-05-25 03:58:59.415254 | orchestrator | changed: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081})
2025-05-25 03:58:59.415260 | orchestrator | changed: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081})
2025-05-25 03:58:59.415267 | orchestrator |
2025-05-25 03:58:59.415273 | orchestrator | TASK [ceph-config : Reset num_osds] ********************************************
2025-05-25 03:58:59.415279 | orchestrator | Sunday 25 May 2025 03:50:58 +0000 (0:00:01.491) 0:03:01.859 ************
2025-05-25 03:58:59.415285 | orchestrator | skipping: [testbed-node-0]
2025-05-25 03:58:59.415291 | orchestrator | skipping: [testbed-node-1]
2025-05-25 03:58:59.415297 | orchestrator | skipping: [testbed-node-2]
2025-05-25 03:58:59.415303 | orchestrator | ok: [testbed-node-3]
2025-05-25 03:58:59.415310 | orchestrator | ok: [testbed-node-4]
2025-05-25 03:58:59.415316 | orchestrator | ok: [testbed-node-5]
2025-05-25 03:58:59.415322 | orchestrator |
2025-05-25 03:58:59.415328 | orchestrator | TASK [ceph-config : Count number of osds for lvm scenario] *********************
2025-05-25 03:58:59.415334 | orchestrator | Sunday 25 May 2025 03:50:58 +0000 (0:00:00.518) 0:03:02.377 ************
2025-05-25 03:58:59.415340 | orchestrator | skipping: [testbed-node-0]
2025-05-25 03:58:59.415346 | orchestrator | skipping: [testbed-node-1]
2025-05-25 03:58:59.415352 | orchestrator | skipping: [testbed-node-2]
2025-05-25 03:58:59.415359 | orchestrator | ok: [testbed-node-3]
2025-05-25 03:58:59.415365 | orchestrator | ok: [testbed-node-4]
2025-05-25 03:58:59.415374 | orchestrator | ok: [testbed-node-5]
2025-05-25 03:58:59.415380 | orchestrator |
2025-05-25 03:58:59.415386 | orchestrator | TASK [ceph-config : Look up for ceph-volume rejected devices] ******************
2025-05-25 03:58:59.415392 | orchestrator | Sunday 25 May 2025 03:50:59 +0000 (0:00:00.631) 0:03:03.008 ************
2025-05-25 03:58:59.415399 | orchestrator | skipping: [testbed-node-0]
2025-05-25 03:58:59.415405 | orchestrator | skipping: [testbed-node-1]
2025-05-25 03:58:59.415411 | orchestrator | skipping: [testbed-node-2]
2025-05-25 03:58:59.415417 | orchestrator | skipping: [testbed-node-3]
2025-05-25 03:58:59.415423 | orchestrator | skipping: [testbed-node-4]
2025-05-25 03:58:59.415429 | orchestrator | skipping: [testbed-node-5]
2025-05-25 03:58:59.415435 | orchestrator |
2025-05-25 03:58:59.415441 | orchestrator | TASK [ceph-config : Set_fact rejected_devices] *********************************
2025-05-25 03:58:59.415447 | orchestrator | Sunday 25 May 2025 03:51:00 +0000 (0:00:00.807) 0:03:03.816 ************
2025-05-25 03:58:59.415453 | orchestrator | skipping: [testbed-node-0]
2025-05-25 03:58:59.415459 | orchestrator | skipping: [testbed-node-1]
2025-05-25 03:58:59.415482 | orchestrator | skipping: [testbed-node-2]
2025-05-25 03:58:59.415490 | orchestrator | skipping: [testbed-node-3]
2025-05-25 03:58:59.415496 | orchestrator | skipping: [testbed-node-4]
2025-05-25 03:58:59.415502 | orchestrator | skipping: [testbed-node-5]
2025-05-25 03:58:59.415508 | orchestrator |
2025-05-25 03:58:59.415514 | orchestrator | TASK [ceph-config : Set_fact _devices] *****************************************
2025-05-25 03:58:59.415520 | orchestrator | Sunday 25 May 2025 03:51:01 +0000 (0:00:00.835) 0:03:04.651 ************
2025-05-25 03:58:59.415526 | orchestrator | skipping: [testbed-node-0]
2025-05-25 03:58:59.415532 | orchestrator | skipping: [testbed-node-1]
2025-05-25 03:58:59.415539 | orchestrator | skipping: [testbed-node-2]
2025-05-25 03:58:59.415545 | orchestrator | skipping: [testbed-node-3]
2025-05-25 03:58:59.415551 | orchestrator | skipping: [testbed-node-4]
2025-05-25 03:58:59.415556 | orchestrator | skipping: [testbed-node-5]
2025-05-25 03:58:59.415563 | orchestrator |
2025-05-25 03:58:59.415569 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm batch --report' to see how many osds are to be created] ***
2025-05-25 03:58:59.415575 | orchestrator | Sunday 25 May 2025 03:51:01 +0000 (0:00:00.669) 0:03:05.321 ************
2025-05-25 03:58:59.415581 | orchestrator | skipping: [testbed-node-0]
2025-05-25 03:58:59.415587 | orchestrator | skipping: [testbed-node-1]
2025-05-25 03:58:59.415594 | orchestrator | skipping: [testbed-node-2]
2025-05-25 03:58:59.415604 | orchestrator | skipping: [testbed-node-3]
2025-05-25 03:58:59.415610 | orchestrator | skipping: [testbed-node-4]
2025-05-25 03:58:59.415616 | orchestrator | skipping: [testbed-node-5]
2025-05-25 03:58:59.415622 | orchestrator |
2025-05-25 03:58:59.415629 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (legacy report)] ***
2025-05-25 03:58:59.415635 | orchestrator | Sunday 25 May 2025 03:51:02 +0000 (0:00:00.763) 0:03:06.084 ************
2025-05-25 03:58:59.415641 | orchestrator | skipping: [testbed-node-0]
2025-05-25 03:58:59.415647 | orchestrator | skipping: [testbed-node-1]
2025-05-25 03:58:59.415653 | orchestrator | skipping: [testbed-node-2]
2025-05-25 03:58:59.415659 | orchestrator | skipping: [testbed-node-3]
2025-05-25 03:58:59.415665 | orchestrator | skipping: [testbed-node-4]
2025-05-25 03:58:59.415671 | orchestrator | skipping: [testbed-node-5]
2025-05-25 03:58:59.415677 | orchestrator |
2025-05-25 03:58:59.415683 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (new report)] ***
2025-05-25 03:58:59.415690 | orchestrator | Sunday 25 May 2025 03:51:03 +0000 (0:00:00.507) 0:03:06.592 ************
2025-05-25 03:58:59.415696 | orchestrator | skipping: [testbed-node-0]
2025-05-25 03:58:59.415702 | orchestrator | skipping: [testbed-node-1]
2025-05-25 03:58:59.415708 | orchestrator | skipping: [testbed-node-2]
2025-05-25 03:58:59.415714 | orchestrator | skipping: [testbed-node-3]
2025-05-25 03:58:59.415720 | orchestrator | skipping: [testbed-node-4]
2025-05-25 03:58:59.415726 | orchestrator | skipping: [testbed-node-5]
2025-05-25 03:58:59.415732 | orchestrator |
2025-05-25 03:58:59.415738 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm list' to see how many osds have already been created] ***
2025-05-25 03:58:59.415744 | orchestrator | Sunday 25 May 2025 03:51:03 +0000 (0:00:00.646) 0:03:07.239 ************
2025-05-25 03:58:59.415751 | orchestrator | skipping: [testbed-node-0]
2025-05-25 03:58:59.415757 | orchestrator | skipping: [testbed-node-1]
2025-05-25 03:58:59.415763 | orchestrator | skipping: [testbed-node-2]
2025-05-25 03:58:59.415769 | orchestrator | ok: [testbed-node-5]
2025-05-25 03:58:59.415775 | orchestrator | ok: [testbed-node-4]
2025-05-25 03:58:59.415781 | orchestrator | ok: [testbed-node-3]
2025-05-25 03:58:59.415787 | orchestrator |
2025-05-25 03:58:59.415793 | orchestrator | TASK [ceph-config : Set_fact num_osds (add existing osds)] *********************
2025-05-25 03:58:59.415799 | orchestrator | Sunday 25 May 2025 03:51:07 +0000 (0:00:03.882) 0:03:11.121 ************
2025-05-25 03:58:59.415805 | orchestrator | skipping: [testbed-node-0]
2025-05-25 03:58:59.415811 | orchestrator | skipping: [testbed-node-1]
2025-05-25 03:58:59.415817 | orchestrator | skipping: [testbed-node-2]
2025-05-25 03:58:59.415824 | orchestrator | ok: [testbed-node-3]
2025-05-25 03:58:59.415830 | orchestrator | ok: [testbed-node-4]
2025-05-25 03:58:59.415836 | orchestrator | ok: [testbed-node-5]
2025-05-25 03:58:59.415842 | orchestrator |
2025-05-25 03:58:59.415848 | orchestrator | TASK [ceph-config : Set_fact _osd_memory_target] *******************************
2025-05-25 03:58:59.415854 | orchestrator | Sunday 25 May 2025 03:51:08 +0000 (0:00:00.874) 0:03:11.995 ************
2025-05-25 03:58:59.415860 | orchestrator | skipping: [testbed-node-0]
2025-05-25 03:58:59.415866 | orchestrator | skipping: [testbed-node-1]
2025-05-25 03:58:59.415872 | orchestrator | skipping: [testbed-node-2]
2025-05-25 03:58:59.415878 | orchestrator | ok: [testbed-node-3]
2025-05-25 03:58:59.415884 | orchestrator | ok: [testbed-node-4]
2025-05-25 03:58:59.415890 | orchestrator | ok: [testbed-node-5]
2025-05-25 03:58:59.415896 | orchestrator |
2025-05-25 03:58:59.415916 | orchestrator | TASK [ceph-config : Set osd_memory_target to cluster host config] **************
2025-05-25 03:58:59.415923 | orchestrator | Sunday 25 May 2025 03:51:09 +0000 (0:00:00.670) 0:03:12.666 ************
2025-05-25 03:58:59.415929 | orchestrator | skipping: [testbed-node-0]
2025-05-25 03:58:59.415935 | orchestrator | skipping: [testbed-node-1]
2025-05-25 03:58:59.415941 | orchestrator | skipping: [testbed-node-2]
2025-05-25 03:58:59.415947 | orchestrator | skipping: [testbed-node-3]
2025-05-25 03:58:59.415953 | orchestrator | skipping: [testbed-node-4]
2025-05-25 03:58:59.415959 | orchestrator | skipping: [testbed-node-5]
2025-05-25 03:58:59.415973 | orchestrator |
2025-05-25 03:58:59.415979 | orchestrator | TASK [ceph-config : Render rgw configs] ****************************************
2025-05-25 03:58:59.415989 | orchestrator | Sunday 25 May 2025 03:51:10 +0000 (0:00:00.874) 0:03:13.541 ************
2025-05-25 03:58:59.415996 | orchestrator | skipping: [testbed-node-0]
2025-05-25 03:58:59.416002 | orchestrator | skipping: [testbed-node-1]
2025-05-25 03:58:59.416008 | orchestrator | skipping: [testbed-node-2]
2025-05-25 03:58:59.416014 | orchestrator | ok: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081})
2025-05-25 03:58:59.416020 | orchestrator | ok: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081})
2025-05-25 03:58:59.416027 | orchestrator | ok: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081})
2025-05-25 03:58:59.416033 | orchestrator |
2025-05-25 03:58:59.416039 | orchestrator | TASK [ceph-config : Set config to cluster] *************************************
2025-05-25 03:58:59.416064 | orchestrator | Sunday 25 May 2025 03:51:10 +0000 (0:00:00.819) 0:03:14.360 ************
2025-05-25 03:58:59.416072 | orchestrator | skipping: [testbed-node-0]
2025-05-25 03:58:59.416078 | orchestrator | skipping: [testbed-node-1]
2025-05-25 03:58:59.416084 | orchestrator | skipping: [testbed-node-2]
2025-05-25 03:58:59.416091 | orchestrator | skipping: [testbed-node-3] => (item=[{'key': 'client.rgw.default.testbed-node-3.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-3.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.13:8081'}}, {'key': 'log_file', 'value': '/var/log/ceph/ceph-rgw-default-testbed-node-3.rgw0.log'}])
2025-05-25 03:58:59.416099 | orchestrator | skipping: [testbed-node-3] => (item=[{'key': 'client.rgw.default.testbed-node-3.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-3.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.13:8081'}}, {'key': 'rgw_frontends', 'value': 'beast endpoint=192.168.16.13:8081'}])
2025-05-25 03:58:59.416106 | orchestrator | skipping: [testbed-node-4] => (item=[{'key': 'client.rgw.default.testbed-node-4.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-4.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.14:8081'}}, {'key': 'log_file', 'value': '/var/log/ceph/ceph-rgw-default-testbed-node-4.rgw0.log'}])
2025-05-25 03:58:59.416112 | orchestrator | skipping: [testbed-node-4] => (item=[{'key': 'client.rgw.default.testbed-node-4.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-4.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.14:8081'}}, {'key': 'rgw_frontends', 'value': 'beast endpoint=192.168.16.14:8081'}])
2025-05-25 03:58:59.416118 | orchestrator | skipping: [testbed-node-3]
2025-05-25 03:58:59.416125 | orchestrator | skipping: [testbed-node-4]
2025-05-25 03:58:59.416131 | orchestrator | skipping: [testbed-node-5] => (item=[{'key': 'client.rgw.default.testbed-node-5.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-5.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.15:8081'}}, {'key': 'log_file', 'value': '/var/log/ceph/ceph-rgw-default-testbed-node-5.rgw0.log'}])
2025-05-25 03:58:59.416137 | orchestrator | skipping: [testbed-node-5] => (item=[{'key': 'client.rgw.default.testbed-node-5.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-5.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.15:8081'}}, {'key': 'rgw_frontends', 'value': 'beast endpoint=192.168.16.15:8081'}])
2025-05-25 03:58:59.416143 | orchestrator | skipping: [testbed-node-5]
2025-05-25 03:58:59.416150 | orchestrator |
2025-05-25 03:58:59.416156 | orchestrator | TASK [ceph-config : Set rgw configs to file] ***********************************
2025-05-25 03:58:59.416162 | orchestrator | Sunday 25 May 2025 03:51:11 +0000 (0:00:00.941) 0:03:15.302 ************
2025-05-25 03:58:59.416168 | orchestrator | skipping: [testbed-node-0]
2025-05-25 03:58:59.416174 | orchestrator | skipping: [testbed-node-1]
2025-05-25 03:58:59.416185 | orchestrator | skipping: [testbed-node-2]
2025-05-25 03:58:59.416191 | orchestrator | skipping: [testbed-node-3]
2025-05-25 03:58:59.416197 | orchestrator | skipping: [testbed-node-4]
2025-05-25 03:58:59.416203 | orchestrator | skipping: [testbed-node-5]
2025-05-25 03:58:59.416209 | orchestrator |
2025-05-25 03:58:59.416215 | orchestrator | TASK [ceph-config : Create ceph conf directory] ********************************
2025-05-25 03:58:59.416221 | orchestrator | Sunday 25 May 2025 03:51:12 +0000 (0:00:00.690) 0:03:15.993 ************
2025-05-25 03:58:59.416227 | orchestrator | skipping: [testbed-node-0]
2025-05-25 03:58:59.416233 | orchestrator | skipping: [testbed-node-1]
2025-05-25 03:58:59.416239 | orchestrator | skipping: [testbed-node-2]
2025-05-25 03:58:59.416245 | orchestrator | skipping: [testbed-node-3]
2025-05-25 03:58:59.416251 | orchestrator | skipping: [testbed-node-4]
2025-05-25 03:58:59.416257 | orchestrator | skipping: [testbed-node-5]
2025-05-25 03:58:59.416263 | orchestrator |
2025-05-25 03:58:59.416270 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] ***
2025-05-25 03:58:59.416276 | orchestrator | Sunday 25 May 2025 03:51:13 +0000 (0:00:00.850) 0:03:16.844 ************
2025-05-25 03:58:59.416282 | orchestrator | skipping: [testbed-node-0]
2025-05-25 03:58:59.416288 | orchestrator | skipping: [testbed-node-1]
2025-05-25 03:58:59.416294 | orchestrator | skipping: [testbed-node-2]
2025-05-25 03:58:59.416300 | orchestrator | skipping: [testbed-node-3]
2025-05-25 03:58:59.416306 | orchestrator | skipping: [testbed-node-4]
2025-05-25 03:58:59.416312 | orchestrator | skipping: [testbed-node-5]
2025-05-25 03:58:59.416318 | orchestrator |
2025-05-25 03:58:59.416328 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] ****
2025-05-25 03:58:59.416334 | orchestrator | Sunday 25 May 2025 03:51:14 +0000 (0:00:00.709) 0:03:17.553 ************
2025-05-25 03:58:59.416340 | orchestrator | skipping: [testbed-node-0]
2025-05-25 03:58:59.416346 | orchestrator | skipping: [testbed-node-1]
2025-05-25 03:58:59.416352 | orchestrator | skipping: [testbed-node-2]
2025-05-25 03:58:59.416358 | orchestrator | skipping: [testbed-node-3]
2025-05-25 03:58:59.416365 | orchestrator | skipping: [testbed-node-4]
2025-05-25 03:58:59.416371 | orchestrator | skipping: [testbed-node-5]
2025-05-25 03:58:59.416377 | orchestrator |
2025-05-25 03:58:59.416383 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] ****
2025-05-25 03:58:59.416389 | orchestrator | Sunday 25 May 2025 03:51:15 +0000 (0:00:00.981) 0:03:18.534 ************
2025-05-25 03:58:59.416395 | orchestrator | skipping: [testbed-node-0]
2025-05-25 03:58:59.416402 | orchestrator | skipping: [testbed-node-1]
2025-05-25 03:58:59.416408 | orchestrator | skipping: [testbed-node-2]
2025-05-25 03:58:59.416430 | orchestrator | skipping: [testbed-node-3]
2025-05-25 03:58:59.416438 | orchestrator | skipping: [testbed-node-4]
2025-05-25 03:58:59.416444 | orchestrator | skipping: [testbed-node-5]
2025-05-25 03:58:59.416450 | orchestrator |
2025-05-25 03:58:59.416456 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] ***************
2025-05-25 03:58:59.416462 | orchestrator | Sunday 25 May 2025 03:51:15 +0000 (0:00:00.739) 0:03:19.274 ************
2025-05-25 03:58:59.416468 | orchestrator | skipping: [testbed-node-0]
2025-05-25 03:58:59.416474 | orchestrator | skipping: [testbed-node-1]
2025-05-25 03:58:59.416480 | orchestrator | skipping: [testbed-node-2]
2025-05-25 03:58:59.416487 | orchestrator | ok: [testbed-node-3]
2025-05-25 03:58:59.416493 | orchestrator | ok: [testbed-node-4]
2025-05-25 03:58:59.416499 | orchestrator | ok: [testbed-node-5]
2025-05-25 03:58:59.416505 | orchestrator |
2025-05-25 03:58:59.416512 | orchestrator | TASK [ceph-facts : Set_fact _interface] ****************************************
2025-05-25 03:58:59.416518 | orchestrator | Sunday 25 May 2025 03:51:16 +0000 (0:00:01.205) 0:03:20.480 ************
2025-05-25 03:58:59.416524 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-3)
2025-05-25 03:58:59.416530 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-4)
2025-05-25 03:58:59.416536 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-5)
2025-05-25 03:58:59.416549 | orchestrator | skipping: [testbed-node-0]
2025-05-25 03:58:59.416555 | orchestrator |
2025-05-25 03:58:59.416561 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ******
2025-05-25 03:58:59.416567 | orchestrator | Sunday 25 May 2025 03:51:17 +0000 (0:00:00.317) 0:03:20.798 ************
2025-05-25 03:58:59.416573 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-3)
2025-05-25 03:58:59.416579 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-4)
2025-05-25 03:58:59.416586 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-5)
2025-05-25 03:58:59.416592 | orchestrator | skipping: [testbed-node-0]
2025-05-25 03:58:59.416598 | orchestrator |
2025-05-25 03:58:59.416604 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ******
2025-05-25 03:58:59.416610 | orchestrator | Sunday 25 May 2025 03:51:17 +0000 (0:00:00.359) 0:03:21.158 ************
2025-05-25 03:58:59.416616 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-3)
2025-05-25 03:58:59.416623 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-4)
2025-05-25 03:58:59.416629 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-5)
2025-05-25 03:58:59.416635 | orchestrator | skipping: [testbed-node-0]
2025-05-25 03:58:59.416641 | orchestrator |
2025-05-25 03:58:59.416647 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] ***************************
2025-05-25 03:58:59.416653 | orchestrator | Sunday 25 May 2025 03:51:17 +0000 (0:00:00.345) 0:03:21.504 ************
2025-05-25 03:58:59.416659 | orchestrator | skipping: [testbed-node-0]
2025-05-25 03:58:59.416665 | orchestrator | skipping: [testbed-node-1]
2025-05-25 03:58:59.416671 | orchestrator | skipping: [testbed-node-2]
2025-05-25 03:58:59.416677 | orchestrator | ok: [testbed-node-3]
2025-05-25 03:58:59.416683 | orchestrator | ok: [testbed-node-4]
2025-05-25 03:58:59.416689 | orchestrator | ok: [testbed-node-5]
2025-05-25 03:58:59.416695 | orchestrator |
2025-05-25 03:58:59.416702 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] *************************************
2025-05-25 03:58:59.416708 | orchestrator | Sunday 25 May 2025 03:51:18 +0000 (0:00:00.549) 0:03:22.054 ************
2025-05-25 03:58:59.416714 | orchestrator | skipping: [testbed-node-0] => (item=0)
2025-05-25 03:58:59.416720 | orchestrator | skipping: [testbed-node-0]
2025-05-25 03:58:59.416726 | orchestrator | skipping: [testbed-node-1] => (item=0)
2025-05-25 03:58:59.416732 | orchestrator | skipping: [testbed-node-1]
2025-05-25 03:58:59.416738 | orchestrator | skipping: [testbed-node-2] => (item=0)
2025-05-25 03:58:59.416744 | orchestrator | skipping: [testbed-node-2]
2025-05-25 03:58:59.416750 | orchestrator | ok: [testbed-node-3] => (item=0)
2025-05-25 03:58:59.416756 | orchestrator | ok: [testbed-node-4] => (item=0)
2025-05-25 03:58:59.416762 | orchestrator | ok: [testbed-node-5] => (item=0)
2025-05-25 03:58:59.416768 | orchestrator |
2025-05-25 03:58:59.416774 | orchestrator | TASK [ceph-config : Generate Ceph file] ****************************************
2025-05-25 03:58:59.416781 | orchestrator | Sunday 25 May 2025 03:51:20 +0000 (0:00:01.534) 0:03:23.588 ************
2025-05-25 03:58:59.416787 | orchestrator | changed: [testbed-node-0]
2025-05-25 03:58:59.416793 | orchestrator | changed: [testbed-node-1]
2025-05-25 03:58:59.416799 | orchestrator | changed: [testbed-node-2]
2025-05-25 03:58:59.416805 | orchestrator | changed: [testbed-node-3]
2025-05-25 03:58:59.416811 | orchestrator | changed: [testbed-node-4]
2025-05-25 03:58:59.416817 | orchestrator | changed: [testbed-node-5]
2025-05-25 03:58:59.416823 | orchestrator |
2025-05-25 03:58:59.416829 | orchestrator | RUNNING HANDLER [ceph-handler : Make tempdir for scripts] **********************
2025-05-25 03:58:59.416835 | orchestrator | Sunday 25 May 2025 03:51:22 +0000 (0:00:02.329) 0:03:25.918 ************
2025-05-25 03:58:59.416841 | orchestrator | changed: [testbed-node-0]
2025-05-25 03:58:59.416847 | orchestrator | changed: [testbed-node-1]
2025-05-25 03:58:59.416853 | orchestrator | changed: [testbed-node-2]
2025-05-25 03:58:59.416859 | orchestrator | changed: [testbed-node-3]
2025-05-25 03:58:59.416865 | orchestrator | changed: [testbed-node-4]
2025-05-25 03:58:59.416871 | orchestrator | changed: [testbed-node-5]
2025-05-25 03:58:59.416882 | orchestrator |
2025-05-25 03:58:59.416888 | orchestrator | RUNNING HANDLER [ceph-handler : Mons handler] **********************************
2025-05-25 03:58:59.416898 | orchestrator | Sunday 25 May 2025 03:51:23 +0000 (0:00:01.019) 0:03:26.938 ************
2025-05-25 03:58:59.416919 | orchestrator | skipping: [testbed-node-3]
2025-05-25 03:58:59.416925 | orchestrator | skipping: [testbed-node-4]
2025-05-25 03:58:59.416931 | orchestrator | skipping: [testbed-node-5]
2025-05-25 03:58:59.416937 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mons.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-05-25 03:58:59.416943 | orchestrator |
2025-05-25 03:58:59.416950 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mon_handler_called before restart] ********
2025-05-25 03:58:59.416956 | orchestrator | Sunday 25 May 2025 03:51:24 +0000 (0:00:01.126) 0:03:28.064 ************
2025-05-25 03:58:59.416962 | orchestrator | ok: [testbed-node-0]
2025-05-25 03:58:59.416968 | orchestrator | ok: [testbed-node-1]
2025-05-25 03:58:59.416974 | orchestrator | ok: [testbed-node-2]
2025-05-25 03:58:59.416980 | orchestrator |
2025-05-25 03:58:59.416987 | orchestrator | RUNNING HANDLER [ceph-handler : Copy mon restart script] ***********************
2025-05-25 03:58:59.417011 | orchestrator | Sunday 25 May 2025 03:51:24 +0000 (0:00:00.332) 0:03:28.396 ************
2025-05-25 03:58:59.417018 | orchestrator | changed: [testbed-node-0]
2025-05-25 03:58:59.417024 | orchestrator | changed: [testbed-node-1]
2025-05-25 03:58:59.417030 | orchestrator | changed: [testbed-node-2]
2025-05-25 03:58:59.417036 | orchestrator |
2025-05-25 03:58:59.417042 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph mon daemon(s)] ********************
2025-05-25 03:58:59.417048 | orchestrator | Sunday 25 May 2025 03:51:26 +0000 (0:00:01.707) 0:03:30.104 ************
2025-05-25 03:58:59.417055 | orchestrator | skipping: [testbed-node-0] =>
(item=testbed-node-0)  2025-05-25 03:58:59.417061 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)  2025-05-25 03:58:59.417067 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2025-05-25 03:58:59.417073 | orchestrator | skipping: [testbed-node-0] 2025-05-25 03:58:59.417079 | orchestrator | 2025-05-25 03:58:59.417085 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mon_handler_called after restart] ********* 2025-05-25 03:58:59.417091 | orchestrator | Sunday 25 May 2025 03:51:27 +0000 (0:00:00.655) 0:03:30.759 ************ 2025-05-25 03:58:59.417097 | orchestrator | ok: [testbed-node-0] 2025-05-25 03:58:59.417103 | orchestrator | ok: [testbed-node-1] 2025-05-25 03:58:59.417110 | orchestrator | ok: [testbed-node-2] 2025-05-25 03:58:59.417116 | orchestrator | 2025-05-25 03:58:59.417122 | orchestrator | RUNNING HANDLER [ceph-handler : Osds handler] ********************************** 2025-05-25 03:58:59.417128 | orchestrator | Sunday 25 May 2025 03:51:27 +0000 (0:00:00.376) 0:03:31.136 ************ 2025-05-25 03:58:59.417134 | orchestrator | skipping: [testbed-node-0] 2025-05-25 03:58:59.417140 | orchestrator | skipping: [testbed-node-1] 2025-05-25 03:58:59.417146 | orchestrator | skipping: [testbed-node-2] 2025-05-25 03:58:59.417152 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_osds.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-05-25 03:58:59.417158 | orchestrator | 2025-05-25 03:58:59.417164 | orchestrator | RUNNING HANDLER [ceph-handler : Set_fact trigger_restart] ********************** 2025-05-25 03:58:59.417171 | orchestrator | Sunday 25 May 2025 03:51:28 +0000 (0:00:00.998) 0:03:32.135 ************ 2025-05-25 03:58:59.417177 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-05-25 03:58:59.417183 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-05-25 03:58:59.417189 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5) 
 2025-05-25 03:58:59.417195 | orchestrator | skipping: [testbed-node-3] 2025-05-25 03:58:59.417201 | orchestrator | 2025-05-25 03:58:59.417207 | orchestrator | RUNNING HANDLER [ceph-handler : Set _osd_handler_called before restart] ******** 2025-05-25 03:58:59.417213 | orchestrator | Sunday 25 May 2025 03:51:29 +0000 (0:00:00.413) 0:03:32.548 ************ 2025-05-25 03:58:59.417220 | orchestrator | skipping: [testbed-node-3] 2025-05-25 03:58:59.417226 | orchestrator | skipping: [testbed-node-4] 2025-05-25 03:58:59.417236 | orchestrator | skipping: [testbed-node-5] 2025-05-25 03:58:59.417242 | orchestrator | 2025-05-25 03:58:59.417249 | orchestrator | RUNNING HANDLER [ceph-handler : Unset noup flag] ******************************* 2025-05-25 03:58:59.417255 | orchestrator | Sunday 25 May 2025 03:51:29 +0000 (0:00:00.315) 0:03:32.864 ************ 2025-05-25 03:58:59.417261 | orchestrator | skipping: [testbed-node-3] 2025-05-25 03:58:59.417267 | orchestrator | 2025-05-25 03:58:59.417273 | orchestrator | RUNNING HANDLER [ceph-handler : Copy osd restart script] *********************** 2025-05-25 03:58:59.417279 | orchestrator | Sunday 25 May 2025 03:51:29 +0000 (0:00:00.206) 0:03:33.071 ************ 2025-05-25 03:58:59.417286 | orchestrator | skipping: [testbed-node-3] 2025-05-25 03:58:59.417292 | orchestrator | skipping: [testbed-node-4] 2025-05-25 03:58:59.417298 | orchestrator | skipping: [testbed-node-5] 2025-05-25 03:58:59.417304 | orchestrator | 2025-05-25 03:58:59.417310 | orchestrator | RUNNING HANDLER [ceph-handler : Get pool list] ********************************* 2025-05-25 03:58:59.417316 | orchestrator | Sunday 25 May 2025 03:51:29 +0000 (0:00:00.353) 0:03:33.425 ************ 2025-05-25 03:58:59.417322 | orchestrator | skipping: [testbed-node-3] 2025-05-25 03:58:59.417328 | orchestrator | 2025-05-25 03:58:59.417334 | orchestrator | RUNNING HANDLER [ceph-handler : Get balancer module status] ******************** 2025-05-25 03:58:59.417340 | orchestrator | 
Sunday 25 May 2025 03:51:30 +0000 (0:00:00.213) 0:03:33.638 ************ 2025-05-25 03:58:59.417346 | orchestrator | skipping: [testbed-node-3] 2025-05-25 03:58:59.417353 | orchestrator | 2025-05-25 03:58:59.417359 | orchestrator | RUNNING HANDLER [ceph-handler : Set_fact pools_pgautoscaler_mode] ************** 2025-05-25 03:58:59.417365 | orchestrator | Sunday 25 May 2025 03:51:30 +0000 (0:00:00.233) 0:03:33.872 ************ 2025-05-25 03:58:59.417371 | orchestrator | skipping: [testbed-node-3] 2025-05-25 03:58:59.417377 | orchestrator | 2025-05-25 03:58:59.417383 | orchestrator | RUNNING HANDLER [ceph-handler : Disable balancer] ****************************** 2025-05-25 03:58:59.417389 | orchestrator | Sunday 25 May 2025 03:51:30 +0000 (0:00:00.326) 0:03:34.199 ************ 2025-05-25 03:58:59.417395 | orchestrator | skipping: [testbed-node-3] 2025-05-25 03:58:59.417402 | orchestrator | 2025-05-25 03:58:59.417408 | orchestrator | RUNNING HANDLER [ceph-handler : Disable pg autoscale on pools] ***************** 2025-05-25 03:58:59.417414 | orchestrator | Sunday 25 May 2025 03:51:30 +0000 (0:00:00.231) 0:03:34.431 ************ 2025-05-25 03:58:59.417420 | orchestrator | skipping: [testbed-node-3] 2025-05-25 03:58:59.417426 | orchestrator | 2025-05-25 03:58:59.417435 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph osds daemon(s)] ******************* 2025-05-25 03:58:59.417441 | orchestrator | Sunday 25 May 2025 03:51:31 +0000 (0:00:00.223) 0:03:34.654 ************ 2025-05-25 03:58:59.417447 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-05-25 03:58:59.417454 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-05-25 03:58:59.417460 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-05-25 03:58:59.417466 | orchestrator | skipping: [testbed-node-3] 2025-05-25 03:58:59.417472 | orchestrator | 2025-05-25 03:58:59.417478 | orchestrator | RUNNING HANDLER [ceph-handler : Set 
_osd_handler_called after restart] ********* 2025-05-25 03:58:59.417484 | orchestrator | Sunday 25 May 2025 03:51:31 +0000 (0:00:00.404) 0:03:35.059 ************ 2025-05-25 03:58:59.417490 | orchestrator | skipping: [testbed-node-3] 2025-05-25 03:58:59.417496 | orchestrator | skipping: [testbed-node-4] 2025-05-25 03:58:59.417502 | orchestrator | skipping: [testbed-node-5] 2025-05-25 03:58:59.417509 | orchestrator | 2025-05-25 03:58:59.417532 | orchestrator | RUNNING HANDLER [ceph-handler : Re-enable pg autoscale on pools] *************** 2025-05-25 03:58:59.417539 | orchestrator | Sunday 25 May 2025 03:51:31 +0000 (0:00:00.330) 0:03:35.390 ************ 2025-05-25 03:58:59.417545 | orchestrator | skipping: [testbed-node-3] 2025-05-25 03:58:59.417552 | orchestrator | 2025-05-25 03:58:59.417558 | orchestrator | RUNNING HANDLER [ceph-handler : Re-enable balancer] **************************** 2025-05-25 03:58:59.417564 | orchestrator | Sunday 25 May 2025 03:51:32 +0000 (0:00:00.221) 0:03:35.611 ************ 2025-05-25 03:58:59.417570 | orchestrator | skipping: [testbed-node-3] 2025-05-25 03:58:59.417580 | orchestrator | 2025-05-25 03:58:59.417587 | orchestrator | RUNNING HANDLER [ceph-handler : Mdss handler] ********************************** 2025-05-25 03:58:59.417593 | orchestrator | Sunday 25 May 2025 03:51:32 +0000 (0:00:00.210) 0:03:35.822 ************ 2025-05-25 03:58:59.417599 | orchestrator | skipping: [testbed-node-0] 2025-05-25 03:58:59.417605 | orchestrator | skipping: [testbed-node-1] 2025-05-25 03:58:59.417611 | orchestrator | skipping: [testbed-node-2] 2025-05-25 03:58:59.417617 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mdss.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-05-25 03:58:59.417624 | orchestrator | 2025-05-25 03:58:59.417630 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mds_handler_called before restart] ******** 2025-05-25 03:58:59.417636 | orchestrator | Sunday 25 May 2025 03:51:33 +0000 
(0:00:01.096) 0:03:36.919 ************ 2025-05-25 03:58:59.417642 | orchestrator | ok: [testbed-node-3] 2025-05-25 03:58:59.417648 | orchestrator | ok: [testbed-node-4] 2025-05-25 03:58:59.417654 | orchestrator | ok: [testbed-node-5] 2025-05-25 03:58:59.417660 | orchestrator | 2025-05-25 03:58:59.417666 | orchestrator | RUNNING HANDLER [ceph-handler : Copy mds restart script] *********************** 2025-05-25 03:58:59.417672 | orchestrator | Sunday 25 May 2025 03:51:33 +0000 (0:00:00.308) 0:03:37.228 ************ 2025-05-25 03:58:59.417678 | orchestrator | changed: [testbed-node-3] 2025-05-25 03:58:59.417684 | orchestrator | changed: [testbed-node-4] 2025-05-25 03:58:59.417690 | orchestrator | changed: [testbed-node-5] 2025-05-25 03:58:59.417696 | orchestrator | 2025-05-25 03:58:59.417702 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph mds daemon(s)] ******************** 2025-05-25 03:58:59.417709 | orchestrator | Sunday 25 May 2025 03:51:34 +0000 (0:00:01.173) 0:03:38.401 ************ 2025-05-25 03:58:59.417715 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-05-25 03:58:59.417721 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-05-25 03:58:59.417727 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-05-25 03:58:59.417733 | orchestrator | skipping: [testbed-node-3] 2025-05-25 03:58:59.417739 | orchestrator | 2025-05-25 03:58:59.417745 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mds_handler_called after restart] ********* 2025-05-25 03:58:59.417751 | orchestrator | Sunday 25 May 2025 03:51:35 +0000 (0:00:01.114) 0:03:39.515 ************ 2025-05-25 03:58:59.417757 | orchestrator | ok: [testbed-node-3] 2025-05-25 03:58:59.417763 | orchestrator | ok: [testbed-node-4] 2025-05-25 03:58:59.417770 | orchestrator | ok: [testbed-node-5] 2025-05-25 03:58:59.417776 | orchestrator | 2025-05-25 03:58:59.417782 | orchestrator | RUNNING HANDLER [ceph-handler : Rgws handler] 
********************************** 2025-05-25 03:58:59.417788 | orchestrator | Sunday 25 May 2025 03:51:36 +0000 (0:00:00.350) 0:03:39.865 ************ 2025-05-25 03:58:59.417794 | orchestrator | skipping: [testbed-node-0] 2025-05-25 03:58:59.417800 | orchestrator | skipping: [testbed-node-1] 2025-05-25 03:58:59.417806 | orchestrator | skipping: [testbed-node-2] 2025-05-25 03:58:59.417812 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_rgws.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-05-25 03:58:59.417819 | orchestrator | 2025-05-25 03:58:59.417825 | orchestrator | RUNNING HANDLER [ceph-handler : Set _rgw_handler_called before restart] ******** 2025-05-25 03:58:59.417831 | orchestrator | Sunday 25 May 2025 03:51:37 +0000 (0:00:01.038) 0:03:40.903 ************ 2025-05-25 03:58:59.417837 | orchestrator | ok: [testbed-node-3] 2025-05-25 03:58:59.417843 | orchestrator | ok: [testbed-node-4] 2025-05-25 03:58:59.417849 | orchestrator | ok: [testbed-node-5] 2025-05-25 03:58:59.417855 | orchestrator | 2025-05-25 03:58:59.417861 | orchestrator | RUNNING HANDLER [ceph-handler : Copy rgw restart script] *********************** 2025-05-25 03:58:59.417868 | orchestrator | Sunday 25 May 2025 03:51:37 +0000 (0:00:00.440) 0:03:41.344 ************ 2025-05-25 03:58:59.417874 | orchestrator | changed: [testbed-node-3] 2025-05-25 03:58:59.417880 | orchestrator | changed: [testbed-node-4] 2025-05-25 03:58:59.417886 | orchestrator | changed: [testbed-node-5] 2025-05-25 03:58:59.417892 | orchestrator | 2025-05-25 03:58:59.417898 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph rgw daemon(s)] ******************** 2025-05-25 03:58:59.417939 | orchestrator | Sunday 25 May 2025 03:51:39 +0000 (0:00:01.338) 0:03:42.682 ************ 2025-05-25 03:58:59.417946 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-05-25 03:58:59.417952 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-05-25 
03:58:59.417958 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-05-25 03:58:59.417965 | orchestrator | skipping: [testbed-node-3] 2025-05-25 03:58:59.417971 | orchestrator | 2025-05-25 03:58:59.417977 | orchestrator | RUNNING HANDLER [ceph-handler : Set _rgw_handler_called after restart] ********* 2025-05-25 03:58:59.417986 | orchestrator | Sunday 25 May 2025 03:51:39 +0000 (0:00:00.782) 0:03:43.465 ************ 2025-05-25 03:58:59.417993 | orchestrator | ok: [testbed-node-3] 2025-05-25 03:58:59.417999 | orchestrator | ok: [testbed-node-4] 2025-05-25 03:58:59.418005 | orchestrator | ok: [testbed-node-5] 2025-05-25 03:58:59.418011 | orchestrator | 2025-05-25 03:58:59.418036 | orchestrator | RUNNING HANDLER [ceph-handler : Rbdmirrors handler] **************************** 2025-05-25 03:58:59.418042 | orchestrator | Sunday 25 May 2025 03:51:40 +0000 (0:00:00.338) 0:03:43.803 ************ 2025-05-25 03:58:59.418049 | orchestrator | skipping: [testbed-node-0] 2025-05-25 03:58:59.418055 | orchestrator | skipping: [testbed-node-1] 2025-05-25 03:58:59.418061 | orchestrator | skipping: [testbed-node-2] 2025-05-25 03:58:59.418067 | orchestrator | skipping: [testbed-node-3] 2025-05-25 03:58:59.418073 | orchestrator | skipping: [testbed-node-4] 2025-05-25 03:58:59.418079 | orchestrator | skipping: [testbed-node-5] 2025-05-25 03:58:59.418085 | orchestrator | 2025-05-25 03:58:59.418091 | orchestrator | RUNNING HANDLER [ceph-handler : Mgrs handler] ********************************** 2025-05-25 03:58:59.418098 | orchestrator | Sunday 25 May 2025 03:51:41 +0000 (0:00:00.808) 0:03:44.611 ************ 2025-05-25 03:58:59.418124 | orchestrator | skipping: [testbed-node-3] 2025-05-25 03:58:59.418132 | orchestrator | skipping: [testbed-node-4] 2025-05-25 03:58:59.418138 | orchestrator | skipping: [testbed-node-5] 2025-05-25 03:58:59.418144 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mgrs.yml for testbed-node-0, testbed-node-1, 
testbed-node-2 2025-05-25 03:58:59.418150 | orchestrator | 2025-05-25 03:58:59.418156 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mgr_handler_called before restart] ******** 2025-05-25 03:58:59.418161 | orchestrator | Sunday 25 May 2025 03:51:42 +0000 (0:00:01.018) 0:03:45.630 ************ 2025-05-25 03:58:59.418166 | orchestrator | ok: [testbed-node-0] 2025-05-25 03:58:59.418172 | orchestrator | ok: [testbed-node-1] 2025-05-25 03:58:59.418177 | orchestrator | ok: [testbed-node-2] 2025-05-25 03:58:59.418182 | orchestrator | 2025-05-25 03:58:59.418188 | orchestrator | RUNNING HANDLER [ceph-handler : Copy mgr restart script] *********************** 2025-05-25 03:58:59.418193 | orchestrator | Sunday 25 May 2025 03:51:42 +0000 (0:00:00.318) 0:03:45.949 ************ 2025-05-25 03:58:59.418199 | orchestrator | changed: [testbed-node-0] 2025-05-25 03:58:59.418204 | orchestrator | changed: [testbed-node-1] 2025-05-25 03:58:59.418209 | orchestrator | changed: [testbed-node-2] 2025-05-25 03:58:59.418214 | orchestrator | 2025-05-25 03:58:59.418220 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph mgr daemon(s)] ******************** 2025-05-25 03:58:59.418225 | orchestrator | Sunday 25 May 2025 03:51:43 +0000 (0:00:01.185) 0:03:47.134 ************ 2025-05-25 03:58:59.418231 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)  2025-05-25 03:58:59.418236 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)  2025-05-25 03:58:59.418241 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2025-05-25 03:58:59.418247 | orchestrator | skipping: [testbed-node-0] 2025-05-25 03:58:59.418252 | orchestrator | 2025-05-25 03:58:59.418257 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mgr_handler_called after restart] ********* 2025-05-25 03:58:59.418263 | orchestrator | Sunday 25 May 2025 03:51:44 +0000 (0:00:00.813) 0:03:47.948 ************ 2025-05-25 03:58:59.418268 | orchestrator | ok: [testbed-node-0] 
2025-05-25 03:58:59.418273 | orchestrator | ok: [testbed-node-1] 2025-05-25 03:58:59.418279 | orchestrator | ok: [testbed-node-2] 2025-05-25 03:58:59.418288 | orchestrator | 2025-05-25 03:58:59.418294 | orchestrator | PLAY [Apply role ceph-mon] ***************************************************** 2025-05-25 03:58:59.418299 | orchestrator | 2025-05-25 03:58:59.418304 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************ 2025-05-25 03:58:59.418310 | orchestrator | Sunday 25 May 2025 03:51:45 +0000 (0:00:00.841) 0:03:48.789 ************ 2025-05-25 03:58:59.418315 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-25 03:58:59.418321 | orchestrator | 2025-05-25 03:58:59.418326 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] ********************* 2025-05-25 03:58:59.418331 | orchestrator | Sunday 25 May 2025 03:51:45 +0000 (0:00:00.523) 0:03:49.313 ************ 2025-05-25 03:58:59.418337 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-25 03:58:59.418342 | orchestrator | 2025-05-25 03:58:59.418348 | orchestrator | TASK [ceph-handler : Check for a mon container] ******************************** 2025-05-25 03:58:59.418353 | orchestrator | Sunday 25 May 2025 03:51:46 +0000 (0:00:00.722) 0:03:50.036 ************ 2025-05-25 03:58:59.418358 | orchestrator | ok: [testbed-node-0] 2025-05-25 03:58:59.418364 | orchestrator | ok: [testbed-node-1] 2025-05-25 03:58:59.418369 | orchestrator | ok: [testbed-node-2] 2025-05-25 03:58:59.418374 | orchestrator | 2025-05-25 03:58:59.418380 | orchestrator | TASK [ceph-handler : Check for an osd container] ******************************* 2025-05-25 03:58:59.418385 | orchestrator | Sunday 25 May 2025 03:51:47 +0000 (0:00:00.681) 0:03:50.718 ************ 2025-05-25 03:58:59.418390 | 
orchestrator | skipping: [testbed-node-0] 2025-05-25 03:58:59.418396 | orchestrator | skipping: [testbed-node-1] 2025-05-25 03:58:59.418401 | orchestrator | skipping: [testbed-node-2] 2025-05-25 03:58:59.418406 | orchestrator | 2025-05-25 03:58:59.418412 | orchestrator | TASK [ceph-handler : Check for a mds container] ******************************** 2025-05-25 03:58:59.418417 | orchestrator | Sunday 25 May 2025 03:51:47 +0000 (0:00:00.312) 0:03:51.030 ************ 2025-05-25 03:58:59.418422 | orchestrator | skipping: [testbed-node-0] 2025-05-25 03:58:59.418428 | orchestrator | skipping: [testbed-node-1] 2025-05-25 03:58:59.418433 | orchestrator | skipping: [testbed-node-2] 2025-05-25 03:58:59.418438 | orchestrator | 2025-05-25 03:58:59.418444 | orchestrator | TASK [ceph-handler : Check for a rgw container] ******************************** 2025-05-25 03:58:59.418449 | orchestrator | Sunday 25 May 2025 03:51:47 +0000 (0:00:00.291) 0:03:51.321 ************ 2025-05-25 03:58:59.418454 | orchestrator | skipping: [testbed-node-0] 2025-05-25 03:58:59.418460 | orchestrator | skipping: [testbed-node-1] 2025-05-25 03:58:59.418465 | orchestrator | skipping: [testbed-node-2] 2025-05-25 03:58:59.418470 | orchestrator | 2025-05-25 03:58:59.418476 | orchestrator | TASK [ceph-handler : Check for a mgr container] ******************************** 2025-05-25 03:58:59.418481 | orchestrator | Sunday 25 May 2025 03:51:48 +0000 (0:00:00.538) 0:03:51.860 ************ 2025-05-25 03:58:59.418486 | orchestrator | ok: [testbed-node-0] 2025-05-25 03:58:59.418492 | orchestrator | ok: [testbed-node-1] 2025-05-25 03:58:59.418500 | orchestrator | ok: [testbed-node-2] 2025-05-25 03:58:59.418506 | orchestrator | 2025-05-25 03:58:59.418511 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] ************************* 2025-05-25 03:58:59.418516 | orchestrator | Sunday 25 May 2025 03:51:49 +0000 (0:00:00.697) 0:03:52.557 ************ 2025-05-25 03:58:59.418522 | orchestrator | 
skipping: [testbed-node-0] 2025-05-25 03:58:59.418527 | orchestrator | skipping: [testbed-node-1] 2025-05-25 03:58:59.418533 | orchestrator | skipping: [testbed-node-2] 2025-05-25 03:58:59.418538 | orchestrator | 2025-05-25 03:58:59.418543 | orchestrator | TASK [ceph-handler : Check for a nfs container] ******************************** 2025-05-25 03:58:59.418549 | orchestrator | Sunday 25 May 2025 03:51:49 +0000 (0:00:00.292) 0:03:52.850 ************ 2025-05-25 03:58:59.418554 | orchestrator | skipping: [testbed-node-0] 2025-05-25 03:58:59.418559 | orchestrator | skipping: [testbed-node-1] 2025-05-25 03:58:59.418565 | orchestrator | skipping: [testbed-node-2] 2025-05-25 03:58:59.418573 | orchestrator | 2025-05-25 03:58:59.418579 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] ************************* 2025-05-25 03:58:59.418600 | orchestrator | Sunday 25 May 2025 03:51:49 +0000 (0:00:00.293) 0:03:53.144 ************ 2025-05-25 03:58:59.418606 | orchestrator | ok: [testbed-node-0] 2025-05-25 03:58:59.418612 | orchestrator | ok: [testbed-node-1] 2025-05-25 03:58:59.418617 | orchestrator | ok: [testbed-node-2] 2025-05-25 03:58:59.418622 | orchestrator | 2025-05-25 03:58:59.418628 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] ********************** 2025-05-25 03:58:59.418633 | orchestrator | Sunday 25 May 2025 03:51:50 +0000 (0:00:00.867) 0:03:54.011 ************ 2025-05-25 03:58:59.418638 | orchestrator | ok: [testbed-node-0] 2025-05-25 03:58:59.418644 | orchestrator | ok: [testbed-node-1] 2025-05-25 03:58:59.418649 | orchestrator | ok: [testbed-node-2] 2025-05-25 03:58:59.418654 | orchestrator | 2025-05-25 03:58:59.418660 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] ******************* 2025-05-25 03:58:59.418665 | orchestrator | Sunday 25 May 2025 03:51:51 +0000 (0:00:00.644) 0:03:54.656 ************ 2025-05-25 03:58:59.418670 | orchestrator | skipping: [testbed-node-0] 2025-05-25 
03:58:59.418676 | orchestrator | skipping: [testbed-node-1] 2025-05-25 03:58:59.418681 | orchestrator | skipping: [testbed-node-2] 2025-05-25 03:58:59.418687 | orchestrator | 2025-05-25 03:58:59.418692 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ****************************** 2025-05-25 03:58:59.418697 | orchestrator | Sunday 25 May 2025 03:51:51 +0000 (0:00:00.258) 0:03:54.915 ************ 2025-05-25 03:58:59.418703 | orchestrator | ok: [testbed-node-0] 2025-05-25 03:58:59.418708 | orchestrator | ok: [testbed-node-1] 2025-05-25 03:58:59.418713 | orchestrator | ok: [testbed-node-2] 2025-05-25 03:58:59.418718 | orchestrator | 2025-05-25 03:58:59.418724 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ****************************** 2025-05-25 03:58:59.418729 | orchestrator | Sunday 25 May 2025 03:51:51 +0000 (0:00:00.285) 0:03:55.200 ************ 2025-05-25 03:58:59.418734 | orchestrator | skipping: [testbed-node-0] 2025-05-25 03:58:59.418740 | orchestrator | skipping: [testbed-node-1] 2025-05-25 03:58:59.418745 | orchestrator | skipping: [testbed-node-2] 2025-05-25 03:58:59.418750 | orchestrator | 2025-05-25 03:58:59.418756 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ****************************** 2025-05-25 03:58:59.418761 | orchestrator | Sunday 25 May 2025 03:51:52 +0000 (0:00:00.452) 0:03:55.653 ************ 2025-05-25 03:58:59.418766 | orchestrator | skipping: [testbed-node-0] 2025-05-25 03:58:59.418772 | orchestrator | skipping: [testbed-node-1] 2025-05-25 03:58:59.418777 | orchestrator | skipping: [testbed-node-2] 2025-05-25 03:58:59.418782 | orchestrator | 2025-05-25 03:58:59.418788 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ****************************** 2025-05-25 03:58:59.418793 | orchestrator | Sunday 25 May 2025 03:51:52 +0000 (0:00:00.341) 0:03:55.994 ************ 2025-05-25 03:58:59.418798 | orchestrator | skipping: [testbed-node-0] 2025-05-25 03:58:59.418804 | 
orchestrator | skipping: [testbed-node-1] 2025-05-25 03:58:59.418809 | orchestrator | skipping: [testbed-node-2] 2025-05-25 03:58:59.418814 | orchestrator | 2025-05-25 03:58:59.418820 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ****************************** 2025-05-25 03:58:59.418825 | orchestrator | Sunday 25 May 2025 03:51:52 +0000 (0:00:00.376) 0:03:56.371 ************ 2025-05-25 03:58:59.418830 | orchestrator | skipping: [testbed-node-0] 2025-05-25 03:58:59.418836 | orchestrator | skipping: [testbed-node-1] 2025-05-25 03:58:59.418841 | orchestrator | skipping: [testbed-node-2] 2025-05-25 03:58:59.418846 | orchestrator | 2025-05-25 03:58:59.418852 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ****************************** 2025-05-25 03:58:59.418857 | orchestrator | Sunday 25 May 2025 03:51:53 +0000 (0:00:00.279) 0:03:56.650 ************ 2025-05-25 03:58:59.418863 | orchestrator | skipping: [testbed-node-0] 2025-05-25 03:58:59.418868 | orchestrator | skipping: [testbed-node-1] 2025-05-25 03:58:59.418873 | orchestrator | skipping: [testbed-node-2] 2025-05-25 03:58:59.418878 | orchestrator | 2025-05-25 03:58:59.418884 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ****************************** 2025-05-25 03:58:59.418895 | orchestrator | Sunday 25 May 2025 03:51:53 +0000 (0:00:00.444) 0:03:57.098 ************ 2025-05-25 03:58:59.418901 | orchestrator | ok: [testbed-node-0] 2025-05-25 03:58:59.418915 | orchestrator | ok: [testbed-node-1] 2025-05-25 03:58:59.418921 | orchestrator | ok: [testbed-node-2] 2025-05-25 03:58:59.418926 | orchestrator | 2025-05-25 03:58:59.418932 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] **************************** 2025-05-25 03:58:59.418937 | orchestrator | Sunday 25 May 2025 03:51:53 +0000 (0:00:00.348) 0:03:57.446 ************ 2025-05-25 03:58:59.418942 | orchestrator | ok: [testbed-node-0] 2025-05-25 03:58:59.418948 | orchestrator | ok: 
[testbed-node-1]
2025-05-25 03:58:59.418953 | orchestrator | ok: [testbed-node-2]
2025-05-25 03:58:59.418958 | orchestrator |
2025-05-25 03:58:59.418964 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] *************************
2025-05-25 03:58:59.418969 | orchestrator | Sunday 25 May 2025  03:51:54 +0000 (0:00:00.324)       0:03:57.771 ************
2025-05-25 03:58:59.418974 | orchestrator | ok: [testbed-node-0]
2025-05-25 03:58:59.418980 | orchestrator | ok: [testbed-node-1]
2025-05-25 03:58:59.418985 | orchestrator | ok: [testbed-node-2]
2025-05-25 03:58:59.418991 | orchestrator |
2025-05-25 03:58:59.418996 | orchestrator | TASK [ceph-mon : Set_fact container_exec_cmd] **********************************
2025-05-25 03:58:59.419001 | orchestrator | Sunday 25 May 2025  03:51:54 +0000 (0:00:00.598)       0:03:58.369 ************
2025-05-25 03:58:59.419006 | orchestrator | ok: [testbed-node-0]
2025-05-25 03:58:59.419012 | orchestrator | ok: [testbed-node-1]
2025-05-25 03:58:59.419017 | orchestrator | ok: [testbed-node-2]
2025-05-25 03:58:59.419023 | orchestrator |
2025-05-25 03:58:59.419031 | orchestrator | TASK [ceph-mon : Include deploy_monitors.yml] **********************************
2025-05-25 03:58:59.419037 | orchestrator | Sunday 25 May 2025  03:51:55 +0000 (0:00:00.488)       0:03:58.657 ************
2025-05-25 03:58:59.419042 | orchestrator | included: /ansible/roles/ceph-mon/tasks/deploy_monitors.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-05-25 03:58:59.419047 | orchestrator |
2025-05-25 03:58:59.419053 | orchestrator | TASK [ceph-mon : Check if monitor initial keyring already exists] **************
2025-05-25 03:58:59.419058 | orchestrator | Sunday 25 May 2025  03:51:55 +0000 (0:00:00.125)       0:03:59.145 ************
2025-05-25 03:58:59.419063 | orchestrator | skipping: [testbed-node-0]
2025-05-25 03:58:59.419069 | orchestrator |
2025-05-25 03:58:59.419074 | orchestrator | TASK [ceph-mon : Generate monitor initial keyring] *****************************
2025-05-25 03:58:59.419080 | orchestrator | Sunday 25 May 2025  03:51:55 +0000 (0:00:00.125)       0:03:59.271 ************
2025-05-25 03:58:59.419085 | orchestrator | changed: [testbed-node-0 -> localhost]
2025-05-25 03:58:59.419090 | orchestrator |
2025-05-25 03:58:59.419112 | orchestrator | TASK [ceph-mon : Set_fact _initial_mon_key_success] ****************************
2025-05-25 03:58:59.419118 | orchestrator | Sunday 25 May 2025  03:51:57 +0000 (0:00:01.328)       0:04:00.599 ************
2025-05-25 03:58:59.419123 | orchestrator | ok: [testbed-node-0]
2025-05-25 03:58:59.419129 | orchestrator | ok: [testbed-node-1]
2025-05-25 03:58:59.419134 | orchestrator | ok: [testbed-node-2]
2025-05-25 03:58:59.419139 | orchestrator |
2025-05-25 03:58:59.419145 | orchestrator | TASK [ceph-mon : Get initial keyring when it already exists] *******************
2025-05-25 03:58:59.419150 | orchestrator | Sunday 25 May 2025  03:51:57 +0000 (0:00:00.280)       0:04:00.880 ************
2025-05-25 03:58:59.419155 | orchestrator | ok: [testbed-node-0]
2025-05-25 03:58:59.419161 | orchestrator | ok: [testbed-node-1]
2025-05-25 03:58:59.419166 | orchestrator | ok: [testbed-node-2]
2025-05-25 03:58:59.419171 | orchestrator |
2025-05-25 03:58:59.419177 | orchestrator | TASK [ceph-mon : Create monitor initial keyring] *******************************
2025-05-25 03:58:59.419182 | orchestrator | Sunday 25 May 2025  03:51:57 +0000 (0:00:00.297)       0:04:01.178 ************
2025-05-25 03:58:59.419187 | orchestrator | changed: [testbed-node-0]
2025-05-25 03:58:59.419193 | orchestrator | changed: [testbed-node-2]
2025-05-25 03:58:59.419198 | orchestrator | changed: [testbed-node-1]
2025-05-25 03:58:59.419203 | orchestrator |
2025-05-25 03:58:59.419209 | orchestrator | TASK [ceph-mon : Copy the initial key in /etc/ceph (for containers)] ***********
2025-05-25 03:58:59.419218 | orchestrator | Sunday 25 May 2025  03:51:59 +0000 (0:00:01.369)       0:04:02.547 ************
2025-05-25 03:58:59.419223 | orchestrator | changed: [testbed-node-0]
2025-05-25 03:58:59.419229 | orchestrator | changed: [testbed-node-1]
2025-05-25 03:58:59.419234 | orchestrator | changed: [testbed-node-2]
2025-05-25 03:58:59.419239 | orchestrator |
2025-05-25 03:58:59.419245 | orchestrator | TASK [ceph-mon : Create monitor directory] *************************************
2025-05-25 03:58:59.419250 | orchestrator | Sunday 25 May 2025  03:51:59 +0000 (0:00:00.672)       0:04:03.487 ************
2025-05-25 03:58:59.419255 | orchestrator | changed: [testbed-node-0]
2025-05-25 03:58:59.419261 | orchestrator | changed: [testbed-node-1]
2025-05-25 03:58:59.419266 | orchestrator | changed: [testbed-node-2]
2025-05-25 03:58:59.419271 | orchestrator |
2025-05-25 03:58:59.419277 | orchestrator | TASK [ceph-mon : Recursively fix ownership of monitor directory] ***************
2025-05-25 03:58:59.419282 | orchestrator | Sunday 25 May 2025  03:52:00 +0000 (0:00:00.636)       0:04:04.160 ************
2025-05-25 03:58:59.419287 | orchestrator | ok: [testbed-node-0]
2025-05-25 03:58:59.419293 | orchestrator | ok: [testbed-node-1]
2025-05-25 03:58:59.419298 | orchestrator | ok: [testbed-node-2]
2025-05-25 03:58:59.419303 | orchestrator |
2025-05-25 03:58:59.419309 | orchestrator | TASK [ceph-mon : Create admin keyring] *****************************************
2025-05-25 03:58:59.419314 | orchestrator | Sunday 25 May 2025  03:52:01 +0000 (0:00:00.636)       0:04:04.796 ************
2025-05-25 03:58:59.419319 | orchestrator | changed: [testbed-node-0]
2025-05-25 03:58:59.419325 | orchestrator |
2025-05-25 03:58:59.419330 | orchestrator | TASK [ceph-mon : Slurp admin keyring] ******************************************
2025-05-25 03:58:59.419335 | orchestrator | Sunday 25 May 2025  03:52:02 +0000 (0:00:01.207)       0:04:06.004 ************
2025-05-25 03:58:59.419341 | orchestrator | ok: [testbed-node-0]
2025-05-25 03:58:59.419346 | orchestrator |
2025-05-25 03:58:59.419351 | orchestrator | TASK [ceph-mon : Copy admin keyring over to mons] ******************************
2025-05-25 03:58:59.419357 | orchestrator | Sunday 25 May 2025  03:52:03 +0000 (0:00:00.659)       0:04:06.663 ************
2025-05-25 03:58:59.419362 | orchestrator | changed: [testbed-node-0] => (item=None)
2025-05-25 03:58:59.419367 | orchestrator | ok: [testbed-node-1 -> testbed-node-0(192.168.16.10)] => (item=None)
2025-05-25 03:58:59.419373 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=None)
2025-05-25 03:58:59.419378 | orchestrator | changed: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=None)
2025-05-25 03:58:59.419383 | orchestrator | ok: [testbed-node-1] => (item=None)
2025-05-25 03:58:59.419389 | orchestrator | ok: [testbed-node-2 -> testbed-node-1(192.168.16.11)] => (item=None)
2025-05-25 03:58:59.419394 | orchestrator | changed: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=None)
2025-05-25 03:58:59.419399 | orchestrator | changed: [testbed-node-0 -> {{ item }}]
2025-05-25 03:58:59.419405 | orchestrator | ok: [testbed-node-1 -> testbed-node-2(192.168.16.12)] => (item=None)
2025-05-25 03:58:59.419410 | orchestrator | ok: [testbed-node-1 -> {{ item }}]
2025-05-25 03:58:59.419416 | orchestrator | ok: [testbed-node-2] => (item=None)
2025-05-25 03:58:59.419421 | orchestrator | ok: [testbed-node-2 -> {{ item }}]
2025-05-25 03:58:59.419426 | orchestrator |
2025-05-25 03:58:59.419434 | orchestrator | TASK [ceph-mon : Import admin keyring into mon keyring] ************************
2025-05-25 03:58:59.419442 | orchestrator | Sunday 25 May 2025  03:52:06 +0000 (0:00:03.373)       0:04:10.037 ************
2025-05-25 03:58:59.419451 | orchestrator | changed: [testbed-node-0]
2025-05-25 03:58:59.419459 | orchestrator | changed: [testbed-node-1]
2025-05-25 03:58:59.419467 | orchestrator | changed: [testbed-node-2]
2025-05-25 03:58:59.419476 | orchestrator |
2025-05-25 03:58:59.419485 | orchestrator | TASK [ceph-mon : Set_fact ceph-mon container command] **************************
2025-05-25 03:58:59.419492 | orchestrator | Sunday 25 May 2025  03:52:08 +0000 (0:00:01.602)       0:04:11.643 ************
2025-05-25 03:58:59.419498 | orchestrator | ok: [testbed-node-0]
2025-05-25 03:58:59.419503 | orchestrator | ok: [testbed-node-1]
2025-05-25 03:58:59.419512 | orchestrator | ok: [testbed-node-2]
2025-05-25 03:58:59.419521 | orchestrator |
2025-05-25 03:58:59.419527 | orchestrator | TASK [ceph-mon : Set_fact monmaptool container command] ************************
2025-05-25 03:58:59.419532 | orchestrator | Sunday 25 May 2025  03:52:08 +0000 (0:00:00.366)       0:04:12.009 ************
2025-05-25 03:58:59.419537 | orchestrator | ok: [testbed-node-0]
2025-05-25 03:58:59.419543 | orchestrator | ok: [testbed-node-1]
2025-05-25 03:58:59.419548 | orchestrator | ok: [testbed-node-2]
2025-05-25 03:58:59.419553 | orchestrator |
2025-05-25 03:58:59.419559 | orchestrator | TASK [ceph-mon : Generate initial monmap] **************************************
2025-05-25 03:58:59.419564 | orchestrator | Sunday 25 May 2025  03:52:08 +0000 (0:00:00.374)       0:04:12.384 ************
2025-05-25 03:58:59.419569 | orchestrator | changed: [testbed-node-0]
2025-05-25 03:58:59.419575 | orchestrator | changed: [testbed-node-1]
2025-05-25 03:58:59.419580 | orchestrator | changed: [testbed-node-2]
2025-05-25 03:58:59.419585 | orchestrator |
2025-05-25 03:58:59.419591 | orchestrator | TASK [ceph-mon : Ceph monitor mkfs with keyring] *******************************
2025-05-25 03:58:59.419614 | orchestrator | Sunday 25 May 2025  03:52:10 +0000 (0:00:01.679)       0:04:14.064 ************
2025-05-25 03:58:59.419620 | orchestrator | changed: [testbed-node-0]
2025-05-25 03:58:59.419626 | orchestrator | changed: [testbed-node-1]
2025-05-25 03:58:59.419631 | orchestrator | changed: [testbed-node-2]
2025-05-25 03:58:59.419636 | orchestrator |
2025-05-25 03:58:59.419642 | orchestrator | TASK [ceph-mon : Ceph monitor mkfs without keyring] ****************************
2025-05-25 03:58:59.419647 | orchestrator | Sunday 25 May 2025  03:52:12 +0000 (0:00:01.520)       0:04:15.584 ************
2025-05-25 03:58:59.419653 | orchestrator | skipping: [testbed-node-0]
2025-05-25 03:58:59.419660 | orchestrator | skipping: [testbed-node-1]
2025-05-25 03:58:59.419669 | orchestrator | skipping: [testbed-node-2]
2025-05-25 03:58:59.419678 | orchestrator |
2025-05-25 03:58:59.419688 | orchestrator | TASK [ceph-mon : Include start_monitor.yml] ************************************
2025-05-25 03:58:59.419696 | orchestrator | Sunday 25 May 2025  03:52:12 +0000 (0:00:00.315)       0:04:15.899 ************
2025-05-25 03:58:59.419705 | orchestrator | included: /ansible/roles/ceph-mon/tasks/start_monitor.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-05-25 03:58:59.419713 | orchestrator |
2025-05-25 03:58:59.419721 | orchestrator | TASK [ceph-mon : Ensure systemd service override directory exists] *************
2025-05-25 03:58:59.419729 | orchestrator | Sunday 25 May 2025  03:52:12 +0000 (0:00:00.586)       0:04:16.486 ************
2025-05-25 03:58:59.419737 | orchestrator | skipping: [testbed-node-0]
2025-05-25 03:58:59.419745 | orchestrator | skipping: [testbed-node-1]
2025-05-25 03:58:59.419754 | orchestrator | skipping: [testbed-node-2]
2025-05-25 03:58:59.419762 | orchestrator |
2025-05-25 03:58:59.419771 | orchestrator | TASK [ceph-mon : Add ceph-mon systemd service overrides] ***********************
2025-05-25 03:58:59.419779 | orchestrator | Sunday 25 May 2025  03:52:13 +0000 (0:00:00.579)       0:04:17.065 ************
2025-05-25 03:58:59.419787 | orchestrator | skipping: [testbed-node-0]
2025-05-25 03:58:59.419796 | orchestrator | skipping: [testbed-node-1]
2025-05-25 03:58:59.419804 | orchestrator | skipping: [testbed-node-2]
2025-05-25 03:58:59.419813 | orchestrator |
2025-05-25 03:58:59.419821 | orchestrator | TASK [ceph-mon : Include_tasks systemd.yml] ************************************
2025-05-25 03:58:59.419830 | orchestrator | Sunday 25 May 2025  03:52:13 +0000 (0:00:00.362)       0:04:17.428 ************
2025-05-25 03:58:59.419839 | orchestrator | included: /ansible/roles/ceph-mon/tasks/systemd.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-05-25 03:58:59.419848 | orchestrator |
2025-05-25 03:58:59.419856 | orchestrator | TASK [ceph-mon : Generate systemd unit file for mon container] *****************
2025-05-25 03:58:59.419865 | orchestrator | Sunday 25 May 2025  03:52:14 +0000 (0:00:00.526)       0:04:17.955 ************
2025-05-25 03:58:59.419874 | orchestrator | changed: [testbed-node-0]
2025-05-25 03:58:59.419883 | orchestrator | changed: [testbed-node-1]
2025-05-25 03:58:59.419892 | orchestrator | changed: [testbed-node-2]
2025-05-25 03:58:59.419901 | orchestrator |
2025-05-25 03:58:59.419953 | orchestrator | TASK [ceph-mon : Generate systemd ceph-mon target file] ************************
2025-05-25 03:58:59.419962 | orchestrator | Sunday 25 May 2025  03:52:16 +0000 (0:00:01.818)       0:04:19.774 ************
2025-05-25 03:58:59.419979 | orchestrator | changed: [testbed-node-0]
2025-05-25 03:58:59.419985 | orchestrator | changed: [testbed-node-1]
2025-05-25 03:58:59.419990 | orchestrator | changed: [testbed-node-2]
2025-05-25 03:58:59.419995 | orchestrator |
2025-05-25 03:58:59.420001 | orchestrator | TASK [ceph-mon : Enable ceph-mon.target] ***************************************
2025-05-25 03:58:59.420006 | orchestrator | Sunday 25 May 2025  03:52:17 +0000 (0:00:01.256)       0:04:21.030 ************
2025-05-25 03:58:59.420012 | orchestrator | changed: [testbed-node-0]
2025-05-25 03:58:59.420017 | orchestrator | changed: [testbed-node-1]
2025-05-25 03:58:59.420022 | orchestrator | changed: [testbed-node-2]
2025-05-25 03:58:59.420027 | orchestrator |
2025-05-25 03:58:59.420033 | orchestrator | TASK [ceph-mon : Start the monitor service] ************************************
2025-05-25 03:58:59.420038 | orchestrator | Sunday 25 May 2025  03:52:19 +0000 (0:00:01.743)       0:04:22.773 ************
2025-05-25 03:58:59.420043 | orchestrator | changed: [testbed-node-0]
2025-05-25 03:58:59.420049 | orchestrator | changed: [testbed-node-2]
2025-05-25 03:58:59.420054 | orchestrator | changed: [testbed-node-1]
2025-05-25 03:58:59.420059 | orchestrator |
2025-05-25 03:58:59.420065 | orchestrator | TASK [ceph-mon : Include_tasks ceph_keys.yml] **********************************
2025-05-25 03:58:59.420070 | orchestrator | Sunday 25 May 2025  03:52:21 +0000 (0:00:02.310)       0:04:25.084 ************
2025-05-25 03:58:59.420075 | orchestrator | included: /ansible/roles/ceph-mon/tasks/ceph_keys.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-05-25 03:58:59.420081 | orchestrator |
2025-05-25 03:58:59.420086 | orchestrator | TASK [ceph-mon : Waiting for the monitor(s) to form the quorum...] *************
2025-05-25 03:58:59.420091 | orchestrator | Sunday 25 May 2025  03:52:22 +0000 (0:00:00.817)       0:04:25.902 ************
2025-05-25 03:58:59.420097 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Waiting for the monitor(s) to form the quorum... (10 retries left).
2025-05-25 03:58:59.420102 | orchestrator | ok: [testbed-node-0]
2025-05-25 03:58:59.420107 | orchestrator |
2025-05-25 03:58:59.420112 | orchestrator | TASK [ceph-mon : Fetch ceph initial keys] **************************************
2025-05-25 03:58:59.420117 | orchestrator | Sunday 25 May 2025  03:52:44 +0000 (0:00:21.773)       0:04:47.675 ************
2025-05-25 03:58:59.420121 | orchestrator | ok: [testbed-node-0]
2025-05-25 03:58:59.420130 | orchestrator | ok: [testbed-node-1]
2025-05-25 03:58:59.420135 | orchestrator | ok: [testbed-node-2]
2025-05-25 03:58:59.420140 | orchestrator |
2025-05-25 03:58:59.420145 | orchestrator | TASK [ceph-mon : Include secure_cluster.yml] ***********************************
2025-05-25 03:58:59.420150 | orchestrator | Sunday 25 May 2025  03:52:54 +0000 (0:00:10.253)       0:04:57.929 ************
2025-05-25 03:58:59.420154 | orchestrator | skipping: [testbed-node-0]
2025-05-25 03:58:59.420159 | orchestrator | skipping: [testbed-node-1]
2025-05-25 03:58:59.420164 | orchestrator | skipping: [testbed-node-2]
2025-05-25 03:58:59.420169 | orchestrator |
2025-05-25 03:58:59.420173 | orchestrator | TASK [ceph-mon : Set cluster configs] ******************************************
2025-05-25 03:58:59.420178 | orchestrator | Sunday 25 May 2025  03:52:54 +0000 (0:00:00.382)       0:04:58.311 ************
2025-05-25 03:58:59.420210 | orchestrator | changed: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__b51e40b3c0239ea1f80c797f92590fca2b3ea0bd'}}, {'key': 'public_network', 'value': '192.168.16.0/20'}])
2025-05-25 03:58:59.420218 | orchestrator | changed: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__b51e40b3c0239ea1f80c797f92590fca2b3ea0bd'}}, {'key': 'cluster_network', 'value': '192.168.16.0/20'}])
2025-05-25 03:58:59.420227 | orchestrator | changed: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__b51e40b3c0239ea1f80c797f92590fca2b3ea0bd'}}, {'key': 'osd_pool_default_crush_rule', 'value': -1}])
2025-05-25 03:58:59.420242 | orchestrator | changed: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__b51e40b3c0239ea1f80c797f92590fca2b3ea0bd'}}, {'key': 'ms_bind_ipv6', 'value': 'False'}])
2025-05-25 03:58:59.420249 | orchestrator | changed: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__b51e40b3c0239ea1f80c797f92590fca2b3ea0bd'}}, {'key': 'ms_bind_ipv4', 'value': 'True'}])
2025-05-25 03:58:59.420258 | orchestrator | skipping: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__b51e40b3c0239ea1f80c797f92590fca2b3ea0bd'}}, {'key': 'osd_crush_chooseleaf_type', 'value': '__omit_place_holder__b51e40b3c0239ea1f80c797f92590fca2b3ea0bd'}])
2025-05-25 03:58:59.420267 | orchestrator |
2025-05-25 03:58:59.420274 | orchestrator | RUNNING HANDLER [ceph-handler : Make tempdir for scripts] **********************
2025-05-25 03:58:59.420282 | orchestrator | Sunday 25 May 2025  03:53:08 +0000 (0:00:13.967)       0:05:12.278 ************
2025-05-25 03:58:59.420289 | orchestrator | skipping: [testbed-node-0]
2025-05-25 03:58:59.420297 | orchestrator | skipping: [testbed-node-1]
2025-05-25 03:58:59.420305 | orchestrator | skipping: [testbed-node-2]
2025-05-25 03:58:59.420313 | orchestrator |
2025-05-25 03:58:59.420320 | orchestrator | RUNNING HANDLER [ceph-handler : Mons handler] **********************************
2025-05-25 03:58:59.420328 | orchestrator | Sunday 25 May 2025  03:53:09 +0000 (0:00:00.317)       0:05:12.596 ************
2025-05-25 03:58:59.420335 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mons.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-05-25 03:58:59.420343 | orchestrator |
2025-05-25 03:58:59.420350 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mon_handler_called before restart] ********
2025-05-25 03:58:59.420357 | orchestrator | Sunday 25 May 2025  03:53:09 +0000 (0:00:00.814)       0:05:13.411 ************
2025-05-25 03:58:59.420365 | orchestrator | ok: [testbed-node-0]
2025-05-25 03:58:59.420373 | orchestrator | ok: [testbed-node-1]
2025-05-25 03:58:59.420380 | orchestrator | ok: [testbed-node-2]
2025-05-25 03:58:59.420387 | orchestrator |
2025-05-25 03:58:59.420395 | orchestrator | RUNNING HANDLER [ceph-handler : Copy mon restart script] ***********************
2025-05-25 03:58:59.420404 | orchestrator | Sunday 25 May 2025  03:53:10 +0000 (0:00:00.314)       0:05:13.725 ************
2025-05-25 03:58:59.420412 | orchestrator | skipping: [testbed-node-0]
2025-05-25 03:58:59.420420 | orchestrator | skipping: [testbed-node-1]
2025-05-25 03:58:59.420429 | orchestrator | skipping: [testbed-node-2]
2025-05-25 03:58:59.420437 | orchestrator |
2025-05-25 03:58:59.420442 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph mon daemon(s)] ********************
2025-05-25 03:58:59.420451 | orchestrator | Sunday 25 May 2025  03:53:10 +0000 (0:00:00.323)       0:05:14.049 ************
2025-05-25 03:58:59.420456 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)
2025-05-25 03:58:59.420461 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)
2025-05-25 03:58:59.420466 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)
2025-05-25 03:58:59.420471 | orchestrator | skipping: [testbed-node-0]
2025-05-25 03:58:59.420475 | orchestrator |
2025-05-25 03:58:59.420480 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mon_handler_called after restart] *********
2025-05-25 03:58:59.420485 | orchestrator | Sunday 25 May 2025  03:53:11 +0000 (0:00:00.857)       0:05:14.907 ************
2025-05-25 03:58:59.420494 | orchestrator | ok: [testbed-node-0]
2025-05-25 03:58:59.420499 | orchestrator | ok: [testbed-node-1]
2025-05-25 03:58:59.420504 | orchestrator | ok: [testbed-node-2]
2025-05-25 03:58:59.420508 | orchestrator |
2025-05-25 03:58:59.420513 | orchestrator | PLAY [Apply role ceph-mgr] *****************************************************
2025-05-25 03:58:59.420518 | orchestrator |
2025-05-25 03:58:59.420523 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************
2025-05-25 03:58:59.420548 | orchestrator | Sunday 25 May 2025  03:53:12 +0000 (0:00:00.992)       0:05:15.899 ************
2025-05-25 03:58:59.420554 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-05-25 03:58:59.420559 | orchestrator |
2025-05-25 03:58:59.420564 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] *********************
2025-05-25 03:58:59.420568 | orchestrator | Sunday 25 May 2025  03:53:12 +0000 (0:00:00.550)       0:05:16.450 ************
2025-05-25 03:58:59.420573 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-05-25 03:58:59.420578 | orchestrator |
2025-05-25 03:58:59.420583 | orchestrator | TASK [ceph-handler : Check for a mon container] ********************************
2025-05-25 03:58:59.420587 | orchestrator | Sunday 25 May 2025  03:53:13 +0000 (0:00:00.868)       0:05:17.319 ************
2025-05-25 03:58:59.420592 | orchestrator | ok: [testbed-node-0]
2025-05-25 03:58:59.420597 | orchestrator | ok: [testbed-node-1]
2025-05-25 03:58:59.420602 | orchestrator | ok: [testbed-node-2]
2025-05-25 03:58:59.420606 | orchestrator |
2025-05-25 03:58:59.420611 | orchestrator | TASK [ceph-handler : Check for an osd container] *******************************
2025-05-25 03:58:59.420616 | orchestrator | Sunday 25 May 2025  03:53:14 +0000 (0:00:00.739)       0:05:18.059 ************
2025-05-25 03:58:59.420620 | orchestrator | skipping: [testbed-node-0]
2025-05-25 03:58:59.420625 | orchestrator | skipping: [testbed-node-1]
2025-05-25 03:58:59.420630 | orchestrator | skipping: [testbed-node-2]
2025-05-25 03:58:59.420635 | orchestrator |
2025-05-25 03:58:59.420639 | orchestrator | TASK [ceph-handler : Check for a mds container] ********************************
2025-05-25 03:58:59.420644 | orchestrator | Sunday 25 May 2025  03:53:14 +0000 (0:00:00.305)       0:05:18.364 ************
2025-05-25 03:58:59.420649 | orchestrator | skipping: [testbed-node-0]
2025-05-25 03:58:59.420654 | orchestrator | skipping: [testbed-node-1]
2025-05-25 03:58:59.420658 | orchestrator | skipping: [testbed-node-2]
2025-05-25 03:58:59.420663 | orchestrator |
2025-05-25 03:58:59.420668 | orchestrator | TASK [ceph-handler : Check for a rgw container] ********************************
2025-05-25 03:58:59.420672 | orchestrator | Sunday 25 May 2025  03:53:15 +0000 (0:00:00.544)       0:05:18.909 ************
2025-05-25 03:58:59.420677 | orchestrator | skipping: [testbed-node-0]
2025-05-25 03:58:59.420682 | orchestrator | skipping: [testbed-node-1]
2025-05-25 03:58:59.420687 | orchestrator | skipping: [testbed-node-2]
2025-05-25 03:58:59.420691 | orchestrator |
2025-05-25 03:58:59.420696 | orchestrator | TASK [ceph-handler : Check for a mgr container] ********************************
2025-05-25 03:58:59.420701 | orchestrator | Sunday 25 May 2025  03:53:15 +0000 (0:00:00.353)       0:05:19.263 ************
2025-05-25 03:58:59.420706 | orchestrator | ok: [testbed-node-0]
2025-05-25 03:58:59.420710 | orchestrator | ok: [testbed-node-1]
2025-05-25 03:58:59.420715 | orchestrator | ok: [testbed-node-2]
2025-05-25 03:58:59.420720 | orchestrator |
2025-05-25 03:58:59.420725 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] *************************
2025-05-25 03:58:59.420730 | orchestrator | Sunday 25 May 2025  03:53:16 +0000 (0:00:00.681)       0:05:19.945 ************
2025-05-25 03:58:59.420734 | orchestrator | skipping: [testbed-node-0]
2025-05-25 03:58:59.420739 | orchestrator | skipping: [testbed-node-1]
2025-05-25 03:58:59.420744 | orchestrator | skipping: [testbed-node-2]
2025-05-25 03:58:59.420748 | orchestrator |
2025-05-25 03:58:59.420753 | orchestrator | TASK [ceph-handler : Check for a nfs container] ********************************
2025-05-25 03:58:59.420758 | orchestrator | Sunday 25 May 2025  03:53:16 +0000 (0:00:00.333)       0:05:20.278 ************
2025-05-25 03:58:59.420766 | orchestrator | skipping: [testbed-node-0]
2025-05-25 03:58:59.420771 | orchestrator | skipping: [testbed-node-1]
2025-05-25 03:58:59.420776 | orchestrator | skipping: [testbed-node-2]
2025-05-25 03:58:59.420781 | orchestrator |
2025-05-25 03:58:59.420785 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] *************************
2025-05-25 03:58:59.420790 | orchestrator | Sunday 25 May 2025  03:53:17 +0000 (0:00:00.581)       0:05:20.859 ************
2025-05-25 03:58:59.420795 | orchestrator | ok: [testbed-node-0]
2025-05-25 03:58:59.420800 | orchestrator | ok: [testbed-node-1]
2025-05-25 03:58:59.420804 | orchestrator | ok: [testbed-node-2]
2025-05-25 03:58:59.420809 | orchestrator |
2025-05-25 03:58:59.420814 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] **********************
2025-05-25 03:58:59.420819 | orchestrator | Sunday 25 May 2025  03:53:18 +0000 (0:00:00.773)       0:05:21.633 ************
2025-05-25 03:58:59.420823 | orchestrator | ok: [testbed-node-0]
2025-05-25 03:58:59.420828 | orchestrator | ok: [testbed-node-1]
2025-05-25 03:58:59.420833 | orchestrator | ok: [testbed-node-2]
2025-05-25 03:58:59.420837 | orchestrator |
2025-05-25 03:58:59.420842 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] *******************
2025-05-25 03:58:59.420847 | orchestrator | Sunday 25 May 2025  03:53:18 +0000 (0:00:00.785)       0:05:22.418 ************
2025-05-25 03:58:59.420852 | orchestrator | skipping: [testbed-node-0]
2025-05-25 03:58:59.420856 | orchestrator | skipping: [testbed-node-1]
2025-05-25 03:58:59.420861 | orchestrator | skipping: [testbed-node-2]
2025-05-25 03:58:59.420866 | orchestrator |
2025-05-25 03:58:59.420871 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ******************************
2025-05-25 03:58:59.420875 | orchestrator | Sunday 25 May 2025  03:53:19 +0000 (0:00:00.326)       0:05:22.745 ************
2025-05-25 03:58:59.420880 | orchestrator | ok: [testbed-node-0]
2025-05-25 03:58:59.420888 | orchestrator | ok: [testbed-node-1]
2025-05-25 03:58:59.420892 | orchestrator | ok: [testbed-node-2]
2025-05-25 03:58:59.420897 | orchestrator |
2025-05-25 03:58:59.420916 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ******************************
2025-05-25 03:58:59.420924 | orchestrator | Sunday 25 May 2025  03:53:19 +0000 (0:00:00.582)       0:05:23.327 ************
2025-05-25 03:58:59.420933 | orchestrator | skipping: [testbed-node-0]
2025-05-25 03:58:59.420938 | orchestrator | skipping: [testbed-node-1]
2025-05-25 03:58:59.420943 | orchestrator | skipping: [testbed-node-2]
2025-05-25 03:58:59.420947 | orchestrator |
2025-05-25 03:58:59.420952 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ******************************
2025-05-25 03:58:59.420957 | orchestrator | Sunday 25 May 2025  03:53:20 +0000 (0:00:00.305)       0:05:23.633 ************
2025-05-25 03:58:59.420962 | orchestrator | skipping: [testbed-node-0]
2025-05-25 03:58:59.420966 | orchestrator | skipping: [testbed-node-1]
2025-05-25 03:58:59.420971 | orchestrator | skipping: [testbed-node-2]
2025-05-25 03:58:59.420976 | orchestrator |
2025-05-25 03:58:59.420980 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ******************************
2025-05-25 03:58:59.421001 | orchestrator | Sunday 25 May 2025  03:53:20 +0000 (0:00:00.286)       0:05:23.920 ************
2025-05-25 03:58:59.421006 | orchestrator | skipping: [testbed-node-0]
2025-05-25 03:58:59.421011 | orchestrator | skipping: [testbed-node-1]
2025-05-25 03:58:59.421016 | orchestrator | skipping: [testbed-node-2]
2025-05-25 03:58:59.421021 | orchestrator |
2025-05-25 03:58:59.421026 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ******************************
2025-05-25 03:58:59.421030 | orchestrator | Sunday 25 May 2025  03:53:20 +0000 (0:00:00.311)       0:05:24.231 ************
2025-05-25 03:58:59.421035 | orchestrator | skipping: [testbed-node-0]
2025-05-25 03:58:59.421040 | orchestrator | skipping: [testbed-node-1]
2025-05-25 03:58:59.421045 | orchestrator | skipping: [testbed-node-2]
2025-05-25 03:58:59.421049 | orchestrator |
2025-05-25 03:58:59.421054 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ******************************
2025-05-25 03:58:59.421059 | orchestrator | Sunday 25 May 2025  03:53:21 +0000 (0:00:00.550)       0:05:24.782 ************
2025-05-25 03:58:59.421064 | orchestrator | skipping: [testbed-node-0]
2025-05-25 03:58:59.421072 | orchestrator | skipping: [testbed-node-1]
2025-05-25 03:58:59.421077 | orchestrator | skipping: [testbed-node-2]
2025-05-25 03:58:59.421082 | orchestrator |
2025-05-25 03:58:59.421087 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ******************************
2025-05-25 03:58:59.421091 | orchestrator | Sunday 25 May 2025  03:53:21 +0000 (0:00:00.294)       0:05:25.077 ************
2025-05-25 03:58:59.421096 | orchestrator | ok: [testbed-node-0]
2025-05-25 03:58:59.421101 | orchestrator | ok: [testbed-node-1]
2025-05-25 03:58:59.421106 | orchestrator | ok: [testbed-node-2]
2025-05-25 03:58:59.421110 | orchestrator |
2025-05-25 03:58:59.421115 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] ****************************
2025-05-25 03:58:59.421120 | orchestrator | Sunday 25 May 2025  03:53:21 +0000 (0:00:00.358)       0:05:25.436 ************
2025-05-25 03:58:59.421125 | orchestrator | ok: [testbed-node-0]
2025-05-25 03:58:59.421129 | orchestrator | ok: [testbed-node-1]
2025-05-25 03:58:59.421134 | orchestrator | ok: [testbed-node-2]
2025-05-25 03:58:59.421139 | orchestrator |
2025-05-25 03:58:59.421144 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] *************************
2025-05-25 03:58:59.421148 | orchestrator | Sunday 25 May 2025  03:53:22 +0000 (0:00:00.337)       0:05:25.773 ************
2025-05-25 03:58:59.421153 | orchestrator | ok: [testbed-node-0]
2025-05-25 03:58:59.421158 | orchestrator | ok: [testbed-node-1]
2025-05-25 03:58:59.421162 | orchestrator | ok: [testbed-node-2]
2025-05-25 03:58:59.421169 | orchestrator |
2025-05-25 03:58:59.421177 | orchestrator | TASK [ceph-mgr : Set_fact container_exec_cmd] **********************************
2025-05-25 03:58:59.421185 | orchestrator | Sunday 25 May 2025  03:53:23 +0000 (0:00:00.934)       0:05:26.708 ************
2025-05-25 03:58:59.421193 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0)
2025-05-25 03:58:59.421201 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2025-05-25 03:58:59.421210 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2025-05-25 03:58:59.421216 | orchestrator |
2025-05-25 03:58:59.421220 | orchestrator | TASK [ceph-mgr : Include common.yml] *******************************************
2025-05-25 03:58:59.421225 | orchestrator | Sunday 25 May 2025  03:53:23 +0000 (0:00:00.627)       0:05:27.336 ************
2025-05-25 03:58:59.421230 | orchestrator | included: /ansible/roles/ceph-mgr/tasks/common.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-05-25 03:58:59.421235 | orchestrator |
2025-05-25 03:58:59.421240 | orchestrator | TASK [ceph-mgr : Create mgr directory] *****************************************
2025-05-25 03:58:59.421244 | orchestrator | Sunday 25 May 2025  03:53:24 +0000 (0:00:00.594)       0:05:27.930 ************
2025-05-25 03:58:59.421249 | orchestrator | changed: [testbed-node-0]
2025-05-25 03:58:59.421254 | orchestrator | changed: [testbed-node-1]
2025-05-25 03:58:59.421259 | orchestrator | changed: [testbed-node-2]
2025-05-25 03:58:59.421263 | orchestrator |
2025-05-25 03:58:59.421268 | orchestrator | TASK [ceph-mgr : Fetch ceph mgr keyring] ***************************************
2025-05-25 03:58:59.421273 | orchestrator | Sunday 25 May 2025  03:53:25 +0000 (0:00:01.042)       0:05:28.973 ************
2025-05-25 03:58:59.421277 | orchestrator | skipping: [testbed-node-0]
2025-05-25 03:58:59.421282 | orchestrator | skipping: [testbed-node-1]
2025-05-25 03:58:59.421287 | orchestrator | skipping: [testbed-node-2]
2025-05-25 03:58:59.421291 | orchestrator |
2025-05-25 03:58:59.421296 | orchestrator | TASK [ceph-mgr : Create ceph mgr keyring(s) on a mon node] *********************
2025-05-25 03:58:59.421301 | orchestrator | Sunday 25 May 2025  03:53:25 +0000 (0:00:00.353)       0:05:29.327 ************
2025-05-25 03:58:59.421306 | orchestrator | changed: [testbed-node-0] => (item=None)
2025-05-25 03:58:59.421310 | orchestrator | changed: [testbed-node-0] => (item=None)
2025-05-25 03:58:59.421315 | orchestrator | changed: [testbed-node-0] => (item=None)
2025-05-25 03:58:59.421320 | orchestrator | changed: [testbed-node-0 -> {{ groups[mon_group_name][0] }}]
2025-05-25 03:58:59.421325 | orchestrator |
2025-05-25 03:58:59.421329 | orchestrator | TASK [ceph-mgr : Set_fact _mgr_keys] *******************************************
2025-05-25 03:58:59.421334 | orchestrator | Sunday 25 May 2025  03:53:36 +0000 (0:00:10.539)       0:05:39.866 ************
2025-05-25 03:58:59.421343 | orchestrator | ok: [testbed-node-0]
2025-05-25 03:58:59.421348 | orchestrator | ok: [testbed-node-1]
2025-05-25 03:58:59.421352 | orchestrator | ok: [testbed-node-2]
2025-05-25 03:58:59.421357 | orchestrator |
2025-05-25 03:58:59.421366 | orchestrator | TASK [ceph-mgr : Get keys from monitors] ***************************************
2025-05-25 03:58:59.421371 | orchestrator | Sunday 25 May 2025  03:53:36 +0000 (0:00:00.342)       0:05:40.209 ************
2025-05-25 03:58:59.421375 | orchestrator | skipping: [testbed-node-0] => (item=None)
2025-05-25 03:58:59.421380 | orchestrator | skipping: [testbed-node-1] => (item=None)
2025-05-25 03:58:59.421385 | orchestrator | skipping: [testbed-node-2] => (item=None)
2025-05-25 03:58:59.421390 | orchestrator | ok: [testbed-node-0] => (item=None)
2025-05-25 03:58:59.421394 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=None)
2025-05-25 03:58:59.421399 | orchestrator | ok: [testbed-node-1 -> testbed-node-0(192.168.16.10)] => (item=None)
2025-05-25 03:58:59.421404 | orchestrator |
2025-05-25 03:58:59.421408 | orchestrator | TASK [ceph-mgr : Copy ceph key(s) if needed] ***********************************
2025-05-25 03:58:59.421413 | orchestrator | Sunday 25 May 2025  03:53:38 +0000 (0:00:02.311)       0:05:42.521 ************
2025-05-25 03:58:59.421434 | orchestrator | skipping: [testbed-node-0] => (item=None)
2025-05-25 03:58:59.421439 | orchestrator | skipping: [testbed-node-1] => (item=None)
2025-05-25 03:58:59.421444 | orchestrator | skipping: [testbed-node-2] => (item=None)
2025-05-25 03:58:59.421449 | orchestrator | changed: [testbed-node-0] => (item=None)
2025-05-25 03:58:59.421454 | orchestrator | changed: [testbed-node-1] => (item=None)
2025-05-25 03:58:59.421458 | orchestrator | changed: [testbed-node-2] => (item=None)
2025-05-25 03:58:59.421463 | orchestrator |
2025-05-25 03:58:59.421468 | orchestrator | TASK [ceph-mgr : Set mgr key permissions] **************************************
2025-05-25 03:58:59.421473 | orchestrator | Sunday 25 May 2025  03:53:40 +0000 (0:00:01.420)       0:05:43.941 ************
2025-05-25 03:58:59.421478 | orchestrator | ok: [testbed-node-1]
2025-05-25 03:58:59.421482 | orchestrator | ok: [testbed-node-2]
2025-05-25 03:58:59.421487 | orchestrator | ok: [testbed-node-0]
2025-05-25 03:58:59.421492 | orchestrator |
2025-05-25 03:58:59.421497 | orchestrator | TASK [ceph-mgr : Append dashboard modules to ceph_mgr_modules] *****************
2025-05-25 03:58:59.421501 | orchestrator | Sunday 25 May 2025  03:53:41 +0000 (0:00:00.758)       0:05:44.699 ************
2025-05-25 03:58:59.421506 | orchestrator | skipping: [testbed-node-0]
2025-05-25 03:58:59.421511 | orchestrator | skipping: [testbed-node-1]
2025-05-25 03:58:59.421516 | orchestrator | skipping: [testbed-node-2]
2025-05-25 03:58:59.421520 | orchestrator |
2025-05-25 03:58:59.421525 | orchestrator | TASK [ceph-mgr : Include pre_requisite.yml] ************************************
2025-05-25 03:58:59.421530 | orchestrator | Sunday 25 May 2025  03:53:41 +0000 (0:00:00.290)       0:05:44.990 ************
2025-05-25 03:58:59.421534 | orchestrator | skipping: [testbed-node-0]
2025-05-25 03:58:59.421539 | orchestrator | skipping: [testbed-node-1]
2025-05-25 03:58:59.421544 | orchestrator | skipping: [testbed-node-2]
2025-05-25 03:58:59.421549 | orchestrator |
2025-05-25 03:58:59.421554 | orchestrator | TASK [ceph-mgr : Include start_mgr.yml] ****************************************
2025-05-25 03:58:59.421558 | orchestrator | Sunday 25 May 2025  03:53:41 +0000 (0:00:00.266)       0:05:45.256 ************
2025-05-25 03:58:59.421563 | orchestrator | included: /ansible/roles/ceph-mgr/tasks/start_mgr.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-05-25 03:58:59.421568 | orchestrator |
2025-05-25 03:58:59.421573 | orchestrator | TASK [ceph-mgr : Ensure systemd service override directory exists] *************
2025-05-25 03:58:59.421578 | orchestrator | Sunday 25 May 2025  03:53:42 +0000 (0:00:00.806)       0:05:46.063 ************
2025-05-25 03:58:59.421582 | orchestrator | skipping: [testbed-node-0]
2025-05-25 03:58:59.421587 | orchestrator | skipping: [testbed-node-1]
2025-05-25 03:58:59.421592 | orchestrator | skipping: [testbed-node-2]
2025-05-25 03:58:59.421596 | orchestrator |
2025-05-25 03:58:59.421601 | orchestrator | TASK [ceph-mgr : Add ceph-mgr systemd service overrides] ***********************
2025-05-25 03:58:59.421613 | orchestrator | Sunday 25 May 2025  03:53:42 +0000 (0:00:00.318)       0:05:46.401 ************
2025-05-25 03:58:59.421618 | orchestrator | skipping: [testbed-node-0]
2025-05-25 03:58:59.421623 | orchestrator | skipping: [testbed-node-1]
2025-05-25 03:58:59.421628 | orchestrator | skipping: [testbed-node-2]
2025-05-25 03:58:59.421633 | orchestrator |
2025-05-25 03:58:59.421637 | orchestrator | TASK [ceph-mgr : Include_tasks systemd.yml] ************************************
2025-05-25 03:58:59.421642 | orchestrator | Sunday 25 May 2025  03:53:43 +0000 (0:00:00.318)       0:05:46.719 ************
2025-05-25 03:58:59.421647 | orchestrator | included: /ansible/roles/ceph-mgr/tasks/systemd.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-05-25 03:58:59.421652 | orchestrator |
2025-05-25 03:58:59.421656 | orchestrator | TASK [ceph-mgr : Generate systemd unit file] ***********************************
2025-05-25 03:58:59.421661 | orchestrator | Sunday 25 May 2025  03:53:43 +0000 (0:00:00.736)       0:05:47.456 ************
2025-05-25 03:58:59.421666 | orchestrator | changed: [testbed-node-0]
2025-05-25 03:58:59.421671 | orchestrator | 
changed: [testbed-node-1] 2025-05-25 03:58:59.421675 | orchestrator | changed: [testbed-node-2] 2025-05-25 03:58:59.421680 | orchestrator | 2025-05-25 03:58:59.421685 | orchestrator | TASK [ceph-mgr : Generate systemd ceph-mgr target file] ************************ 2025-05-25 03:58:59.421689 | orchestrator | Sunday 25 May 2025 03:53:45 +0000 (0:00:01.204) 0:05:48.660 ************ 2025-05-25 03:58:59.421694 | orchestrator | changed: [testbed-node-0] 2025-05-25 03:58:59.421699 | orchestrator | changed: [testbed-node-1] 2025-05-25 03:58:59.421704 | orchestrator | changed: [testbed-node-2] 2025-05-25 03:58:59.421708 | orchestrator | 2025-05-25 03:58:59.421713 | orchestrator | TASK [ceph-mgr : Enable ceph-mgr.target] *************************************** 2025-05-25 03:58:59.421718 | orchestrator | Sunday 25 May 2025 03:53:46 +0000 (0:00:01.138) 0:05:49.799 ************ 2025-05-25 03:58:59.421723 | orchestrator | changed: [testbed-node-0] 2025-05-25 03:58:59.421727 | orchestrator | changed: [testbed-node-1] 2025-05-25 03:58:59.421732 | orchestrator | changed: [testbed-node-2] 2025-05-25 03:58:59.421737 | orchestrator | 2025-05-25 03:58:59.421741 | orchestrator | TASK [ceph-mgr : Systemd start mgr] ******************************************** 2025-05-25 03:58:59.421746 | orchestrator | Sunday 25 May 2025 03:53:48 +0000 (0:00:02.031) 0:05:51.830 ************ 2025-05-25 03:58:59.421751 | orchestrator | changed: [testbed-node-0] 2025-05-25 03:58:59.421756 | orchestrator | changed: [testbed-node-1] 2025-05-25 03:58:59.421760 | orchestrator | changed: [testbed-node-2] 2025-05-25 03:58:59.421765 | orchestrator | 2025-05-25 03:58:59.421773 | orchestrator | TASK [ceph-mgr : Include mgr_modules.yml] ************************************** 2025-05-25 03:58:59.421777 | orchestrator | Sunday 25 May 2025 03:53:51 +0000 (0:00:02.936) 0:05:54.767 ************ 2025-05-25 03:58:59.421782 | orchestrator | skipping: [testbed-node-0] 2025-05-25 03:58:59.421787 | orchestrator | skipping: 
[testbed-node-1] 2025-05-25 03:58:59.421792 | orchestrator | included: /ansible/roles/ceph-mgr/tasks/mgr_modules.yml for testbed-node-2 2025-05-25 03:58:59.421797 | orchestrator | 2025-05-25 03:58:59.421801 | orchestrator | TASK [ceph-mgr : Wait for all mgr to be up] ************************************ 2025-05-25 03:58:59.421806 | orchestrator | Sunday 25 May 2025 03:53:51 +0000 (0:00:00.458) 0:05:55.225 ************ 2025-05-25 03:58:59.421811 | orchestrator | FAILED - RETRYING: [testbed-node-2 -> testbed-node-0]: Wait for all mgr to be up (30 retries left). 2025-05-25 03:58:59.421816 | orchestrator | FAILED - RETRYING: [testbed-node-2 -> testbed-node-0]: Wait for all mgr to be up (29 retries left). 2025-05-25 03:58:59.421834 | orchestrator | FAILED - RETRYING: [testbed-node-2 -> testbed-node-0]: Wait for all mgr to be up (28 retries left). 2025-05-25 03:58:59.421840 | orchestrator | FAILED - RETRYING: [testbed-node-2 -> testbed-node-0]: Wait for all mgr to be up (27 retries left). 2025-05-25 03:58:59.421845 | orchestrator | FAILED - RETRYING: [testbed-node-2 -> testbed-node-0]: Wait for all mgr to be up (26 retries left). 
2025-05-25 03:58:59.421850 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)] 2025-05-25 03:58:59.421855 | orchestrator | 2025-05-25 03:58:59.421864 | orchestrator | TASK [ceph-mgr : Get enabled modules from ceph-mgr] **************************** 2025-05-25 03:58:59.421868 | orchestrator | Sunday 25 May 2025 03:54:21 +0000 (0:00:29.958) 0:06:25.184 ************ 2025-05-25 03:58:59.421873 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)] 2025-05-25 03:58:59.421878 | orchestrator | 2025-05-25 03:58:59.421883 | orchestrator | TASK [ceph-mgr : Set _ceph_mgr_modules fact (convert _ceph_mgr_modules.stdout to a dict)] *** 2025-05-25 03:58:59.421887 | orchestrator | Sunday 25 May 2025 03:54:23 +0000 (0:00:01.586) 0:06:26.770 ************ 2025-05-25 03:58:59.421892 | orchestrator | ok: [testbed-node-2] 2025-05-25 03:58:59.421897 | orchestrator | 2025-05-25 03:58:59.421916 | orchestrator | TASK [ceph-mgr : Set _disabled_ceph_mgr_modules fact] ************************** 2025-05-25 03:58:59.421922 | orchestrator | Sunday 25 May 2025 03:54:24 +0000 (0:00:00.806) 0:06:27.577 ************ 2025-05-25 03:58:59.421927 | orchestrator | ok: [testbed-node-2] 2025-05-25 03:58:59.421931 | orchestrator | 2025-05-25 03:58:59.421936 | orchestrator | TASK [ceph-mgr : Disable ceph mgr enabled modules] ***************************** 2025-05-25 03:58:59.421941 | orchestrator | Sunday 25 May 2025 03:54:24 +0000 (0:00:00.160) 0:06:27.737 ************ 2025-05-25 03:58:59.421946 | orchestrator | changed: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=iostat) 2025-05-25 03:58:59.421950 | orchestrator | changed: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=nfs) 2025-05-25 03:58:59.421955 | orchestrator | changed: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=restful) 2025-05-25 03:58:59.421960 | orchestrator | 2025-05-25 03:58:59.421965 | orchestrator | TASK [ceph-mgr : Add modules to ceph-mgr] 
************************************** 2025-05-25 03:58:59.421969 | orchestrator | Sunday 25 May 2025 03:54:30 +0000 (0:00:06.357) 0:06:34.095 ************ 2025-05-25 03:58:59.421974 | orchestrator | skipping: [testbed-node-2] => (item=balancer)  2025-05-25 03:58:59.421979 | orchestrator | changed: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=dashboard) 2025-05-25 03:58:59.421984 | orchestrator | changed: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=prometheus) 2025-05-25 03:58:59.421988 | orchestrator | skipping: [testbed-node-2] => (item=status)  2025-05-25 03:58:59.421993 | orchestrator | 2025-05-25 03:58:59.421998 | orchestrator | RUNNING HANDLER [ceph-handler : Make tempdir for scripts] ********************** 2025-05-25 03:58:59.422002 | orchestrator | Sunday 25 May 2025 03:54:35 +0000 (0:00:04.468) 0:06:38.563 ************ 2025-05-25 03:58:59.422007 | orchestrator | changed: [testbed-node-0] 2025-05-25 03:58:59.422032 | orchestrator | changed: [testbed-node-1] 2025-05-25 03:58:59.422038 | orchestrator | changed: [testbed-node-2] 2025-05-25 03:58:59.422043 | orchestrator | 2025-05-25 03:58:59.422048 | orchestrator | RUNNING HANDLER [ceph-handler : Mgrs handler] ********************************** 2025-05-25 03:58:59.422053 | orchestrator | Sunday 25 May 2025 03:54:35 +0000 (0:00:00.910) 0:06:39.474 ************ 2025-05-25 03:58:59.422058 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mgrs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-25 03:58:59.422063 | orchestrator | 2025-05-25 03:58:59.422067 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mgr_handler_called before restart] ******** 2025-05-25 03:58:59.422072 | orchestrator | Sunday 25 May 2025 03:54:36 +0000 (0:00:00.529) 0:06:40.003 ************ 2025-05-25 03:58:59.422077 | orchestrator | ok: [testbed-node-0] 2025-05-25 03:58:59.422082 | orchestrator | ok: [testbed-node-1] 2025-05-25 03:58:59.422086 | orchestrator | ok: 
[testbed-node-2] 2025-05-25 03:58:59.422091 | orchestrator | 2025-05-25 03:58:59.422096 | orchestrator | RUNNING HANDLER [ceph-handler : Copy mgr restart script] *********************** 2025-05-25 03:58:59.422101 | orchestrator | Sunday 25 May 2025 03:54:36 +0000 (0:00:00.314) 0:06:40.318 ************ 2025-05-25 03:58:59.422105 | orchestrator | changed: [testbed-node-0] 2025-05-25 03:58:59.422110 | orchestrator | changed: [testbed-node-1] 2025-05-25 03:58:59.422115 | orchestrator | changed: [testbed-node-2] 2025-05-25 03:58:59.422120 | orchestrator | 2025-05-25 03:58:59.422125 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph mgr daemon(s)] ******************** 2025-05-25 03:58:59.422130 | orchestrator | Sunday 25 May 2025 03:54:38 +0000 (0:00:01.385) 0:06:41.704 ************ 2025-05-25 03:58:59.422138 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)  2025-05-25 03:58:59.422143 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)  2025-05-25 03:58:59.422148 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2025-05-25 03:58:59.422153 | orchestrator | skipping: [testbed-node-0] 2025-05-25 03:58:59.422158 | orchestrator | 2025-05-25 03:58:59.422166 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mgr_handler_called after restart] ********* 2025-05-25 03:58:59.422170 | orchestrator | Sunday 25 May 2025 03:54:38 +0000 (0:00:00.581) 0:06:42.285 ************ 2025-05-25 03:58:59.422175 | orchestrator | ok: [testbed-node-0] 2025-05-25 03:58:59.422180 | orchestrator | ok: [testbed-node-1] 2025-05-25 03:58:59.422185 | orchestrator | ok: [testbed-node-2] 2025-05-25 03:58:59.422190 | orchestrator | 2025-05-25 03:58:59.422194 | orchestrator | PLAY [Apply role ceph-osd] ***************************************************** 2025-05-25 03:58:59.422199 | orchestrator | 2025-05-25 03:58:59.422204 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************ 2025-05-25 
03:58:59.422209 | orchestrator | Sunday 25 May 2025 03:54:39 +0000 (0:00:00.567) 0:06:42.852 ************ 2025-05-25 03:58:59.422213 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-05-25 03:58:59.422218 | orchestrator | 2025-05-25 03:58:59.422223 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] ********************* 2025-05-25 03:58:59.422244 | orchestrator | Sunday 25 May 2025 03:54:40 +0000 (0:00:00.688) 0:06:43.540 ************ 2025-05-25 03:58:59.422250 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-05-25 03:58:59.422255 | orchestrator | 2025-05-25 03:58:59.422260 | orchestrator | TASK [ceph-handler : Check for a mon container] ******************************** 2025-05-25 03:58:59.422265 | orchestrator | Sunday 25 May 2025 03:54:40 +0000 (0:00:00.542) 0:06:44.082 ************ 2025-05-25 03:58:59.422269 | orchestrator | skipping: [testbed-node-3] 2025-05-25 03:58:59.422274 | orchestrator | skipping: [testbed-node-4] 2025-05-25 03:58:59.422279 | orchestrator | skipping: [testbed-node-5] 2025-05-25 03:58:59.422284 | orchestrator | 2025-05-25 03:58:59.422288 | orchestrator | TASK [ceph-handler : Check for an osd container] ******************************* 2025-05-25 03:58:59.422293 | orchestrator | Sunday 25 May 2025 03:54:40 +0000 (0:00:00.388) 0:06:44.471 ************ 2025-05-25 03:58:59.422298 | orchestrator | ok: [testbed-node-3] 2025-05-25 03:58:59.422303 | orchestrator | ok: [testbed-node-4] 2025-05-25 03:58:59.422308 | orchestrator | ok: [testbed-node-5] 2025-05-25 03:58:59.422312 | orchestrator | 2025-05-25 03:58:59.422317 | orchestrator | TASK [ceph-handler : Check for a mds container] ******************************** 2025-05-25 03:58:59.422322 | orchestrator | Sunday 25 May 2025 03:54:41 +0000 (0:00:01.012) 0:06:45.483 ************ 
2025-05-25 03:58:59.422327 | orchestrator | ok: [testbed-node-3] 2025-05-25 03:58:59.422331 | orchestrator | ok: [testbed-node-4] 2025-05-25 03:58:59.422336 | orchestrator | ok: [testbed-node-5] 2025-05-25 03:58:59.422341 | orchestrator | 2025-05-25 03:58:59.422346 | orchestrator | TASK [ceph-handler : Check for a rgw container] ******************************** 2025-05-25 03:58:59.422350 | orchestrator | Sunday 25 May 2025 03:54:42 +0000 (0:00:00.679) 0:06:46.163 ************ 2025-05-25 03:58:59.422355 | orchestrator | ok: [testbed-node-3] 2025-05-25 03:58:59.422360 | orchestrator | ok: [testbed-node-4] 2025-05-25 03:58:59.422365 | orchestrator | ok: [testbed-node-5] 2025-05-25 03:58:59.422369 | orchestrator | 2025-05-25 03:58:59.422374 | orchestrator | TASK [ceph-handler : Check for a mgr container] ******************************** 2025-05-25 03:58:59.422379 | orchestrator | Sunday 25 May 2025 03:54:43 +0000 (0:00:00.723) 0:06:46.887 ************ 2025-05-25 03:58:59.422384 | orchestrator | skipping: [testbed-node-3] 2025-05-25 03:58:59.422388 | orchestrator | skipping: [testbed-node-4] 2025-05-25 03:58:59.422393 | orchestrator | skipping: [testbed-node-5] 2025-05-25 03:58:59.422398 | orchestrator | 2025-05-25 03:58:59.422403 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] ************************* 2025-05-25 03:58:59.422411 | orchestrator | Sunday 25 May 2025 03:54:43 +0000 (0:00:00.282) 0:06:47.169 ************ 2025-05-25 03:58:59.422416 | orchestrator | skipping: [testbed-node-3] 2025-05-25 03:58:59.422421 | orchestrator | skipping: [testbed-node-4] 2025-05-25 03:58:59.422426 | orchestrator | skipping: [testbed-node-5] 2025-05-25 03:58:59.422431 | orchestrator | 2025-05-25 03:58:59.422435 | orchestrator | TASK [ceph-handler : Check for a nfs container] ******************************** 2025-05-25 03:58:59.422440 | orchestrator | Sunday 25 May 2025 03:54:44 +0000 (0:00:00.539) 0:06:47.709 ************ 2025-05-25 03:58:59.422445 | 
orchestrator | skipping: [testbed-node-3] 2025-05-25 03:58:59.422450 | orchestrator | skipping: [testbed-node-4] 2025-05-25 03:58:59.422454 | orchestrator | skipping: [testbed-node-5] 2025-05-25 03:58:59.422459 | orchestrator | 2025-05-25 03:58:59.422464 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] ************************* 2025-05-25 03:58:59.422469 | orchestrator | Sunday 25 May 2025 03:54:44 +0000 (0:00:00.300) 0:06:48.010 ************ 2025-05-25 03:58:59.422473 | orchestrator | ok: [testbed-node-3] 2025-05-25 03:58:59.422478 | orchestrator | ok: [testbed-node-4] 2025-05-25 03:58:59.422483 | orchestrator | ok: [testbed-node-5] 2025-05-25 03:58:59.422488 | orchestrator | 2025-05-25 03:58:59.422492 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] ********************** 2025-05-25 03:58:59.422497 | orchestrator | Sunday 25 May 2025 03:54:45 +0000 (0:00:00.652) 0:06:48.662 ************ 2025-05-25 03:58:59.422502 | orchestrator | ok: [testbed-node-3] 2025-05-25 03:58:59.422507 | orchestrator | ok: [testbed-node-4] 2025-05-25 03:58:59.422511 | orchestrator | ok: [testbed-node-5] 2025-05-25 03:58:59.422516 | orchestrator | 2025-05-25 03:58:59.422521 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] ******************* 2025-05-25 03:58:59.422525 | orchestrator | Sunday 25 May 2025 03:54:45 +0000 (0:00:00.636) 0:06:49.298 ************ 2025-05-25 03:58:59.422530 | orchestrator | skipping: [testbed-node-3] 2025-05-25 03:58:59.422535 | orchestrator | skipping: [testbed-node-4] 2025-05-25 03:58:59.422540 | orchestrator | skipping: [testbed-node-5] 2025-05-25 03:58:59.422544 | orchestrator | 2025-05-25 03:58:59.422549 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ****************************** 2025-05-25 03:58:59.422554 | orchestrator | Sunday 25 May 2025 03:54:46 +0000 (0:00:00.625) 0:06:49.924 ************ 2025-05-25 03:58:59.422558 | orchestrator | skipping: 
[testbed-node-3] 2025-05-25 03:58:59.422563 | orchestrator | skipping: [testbed-node-4] 2025-05-25 03:58:59.422568 | orchestrator | skipping: [testbed-node-5] 2025-05-25 03:58:59.422572 | orchestrator | 2025-05-25 03:58:59.422577 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ****************************** 2025-05-25 03:58:59.422582 | orchestrator | Sunday 25 May 2025 03:54:46 +0000 (0:00:00.314) 0:06:50.238 ************ 2025-05-25 03:58:59.422587 | orchestrator | ok: [testbed-node-3] 2025-05-25 03:58:59.422592 | orchestrator | ok: [testbed-node-4] 2025-05-25 03:58:59.422596 | orchestrator | ok: [testbed-node-5] 2025-05-25 03:58:59.422601 | orchestrator | 2025-05-25 03:58:59.422608 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ****************************** 2025-05-25 03:58:59.422613 | orchestrator | Sunday 25 May 2025 03:54:47 +0000 (0:00:00.366) 0:06:50.604 ************ 2025-05-25 03:58:59.422618 | orchestrator | ok: [testbed-node-3] 2025-05-25 03:58:59.422623 | orchestrator | ok: [testbed-node-4] 2025-05-25 03:58:59.422628 | orchestrator | ok: [testbed-node-5] 2025-05-25 03:58:59.422632 | orchestrator | 2025-05-25 03:58:59.422637 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ****************************** 2025-05-25 03:58:59.422642 | orchestrator | Sunday 25 May 2025 03:54:47 +0000 (0:00:00.320) 0:06:50.924 ************ 2025-05-25 03:58:59.422647 | orchestrator | ok: [testbed-node-3] 2025-05-25 03:58:59.422651 | orchestrator | ok: [testbed-node-4] 2025-05-25 03:58:59.422656 | orchestrator | ok: [testbed-node-5] 2025-05-25 03:58:59.422661 | orchestrator | 2025-05-25 03:58:59.422666 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ****************************** 2025-05-25 03:58:59.422670 | orchestrator | Sunday 25 May 2025 03:54:48 +0000 (0:00:00.609) 0:06:51.534 ************ 2025-05-25 03:58:59.422681 | orchestrator | skipping: [testbed-node-3] 2025-05-25 03:58:59.422686 | 
orchestrator | skipping: [testbed-node-4] 2025-05-25 03:58:59.422691 | orchestrator | skipping: [testbed-node-5] 2025-05-25 03:58:59.422696 | orchestrator | 2025-05-25 03:58:59.422700 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ****************************** 2025-05-25 03:58:59.422705 | orchestrator | Sunday 25 May 2025 03:54:48 +0000 (0:00:00.316) 0:06:51.851 ************ 2025-05-25 03:58:59.422710 | orchestrator | skipping: [testbed-node-3] 2025-05-25 03:58:59.422715 | orchestrator | skipping: [testbed-node-4] 2025-05-25 03:58:59.422719 | orchestrator | skipping: [testbed-node-5] 2025-05-25 03:58:59.422724 | orchestrator | 2025-05-25 03:58:59.422729 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ****************************** 2025-05-25 03:58:59.422734 | orchestrator | Sunday 25 May 2025 03:54:48 +0000 (0:00:00.295) 0:06:52.147 ************ 2025-05-25 03:58:59.422738 | orchestrator | skipping: [testbed-node-3] 2025-05-25 03:58:59.422743 | orchestrator | skipping: [testbed-node-4] 2025-05-25 03:58:59.422748 | orchestrator | skipping: [testbed-node-5] 2025-05-25 03:58:59.422753 | orchestrator | 2025-05-25 03:58:59.422758 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] **************************** 2025-05-25 03:58:59.422762 | orchestrator | Sunday 25 May 2025 03:54:48 +0000 (0:00:00.279) 0:06:52.426 ************ 2025-05-25 03:58:59.422767 | orchestrator | ok: [testbed-node-3] 2025-05-25 03:58:59.422772 | orchestrator | ok: [testbed-node-4] 2025-05-25 03:58:59.422777 | orchestrator | ok: [testbed-node-5] 2025-05-25 03:58:59.422781 | orchestrator | 2025-05-25 03:58:59.422786 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] ************************* 2025-05-25 03:58:59.422791 | orchestrator | Sunday 25 May 2025 03:54:49 +0000 (0:00:00.612) 0:06:53.038 ************ 2025-05-25 03:58:59.422796 | orchestrator | ok: [testbed-node-3] 2025-05-25 03:58:59.422800 | orchestrator | ok: 
[testbed-node-4] 2025-05-25 03:58:59.422805 | orchestrator | ok: [testbed-node-5] 2025-05-25 03:58:59.422810 | orchestrator | 2025-05-25 03:58:59.422815 | orchestrator | TASK [ceph-osd : Set_fact add_osd] ********************************************* 2025-05-25 03:58:59.422820 | orchestrator | Sunday 25 May 2025 03:54:50 +0000 (0:00:00.515) 0:06:53.554 ************ 2025-05-25 03:58:59.422824 | orchestrator | ok: [testbed-node-3] 2025-05-25 03:58:59.422829 | orchestrator | ok: [testbed-node-4] 2025-05-25 03:58:59.422834 | orchestrator | ok: [testbed-node-5] 2025-05-25 03:58:59.422839 | orchestrator | 2025-05-25 03:58:59.422843 | orchestrator | TASK [ceph-osd : Set_fact container_exec_cmd] ********************************** 2025-05-25 03:58:59.422848 | orchestrator | Sunday 25 May 2025 03:54:50 +0000 (0:00:00.325) 0:06:53.880 ************ 2025-05-25 03:58:59.422853 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2025-05-25 03:58:59.422858 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2025-05-25 03:58:59.422863 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2025-05-25 03:58:59.422867 | orchestrator | 2025-05-25 03:58:59.422872 | orchestrator | TASK [ceph-osd : Include_tasks system_tuning.yml] ****************************** 2025-05-25 03:58:59.422877 | orchestrator | Sunday 25 May 2025 03:54:51 +0000 (0:00:00.864) 0:06:54.744 ************ 2025-05-25 03:58:59.422882 | orchestrator | included: /ansible/roles/ceph-osd/tasks/system_tuning.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-05-25 03:58:59.422886 | orchestrator | 2025-05-25 03:58:59.422891 | orchestrator | TASK [ceph-osd : Create tmpfiles.d directory] ********************************** 2025-05-25 03:58:59.422896 | orchestrator | Sunday 25 May 2025 03:54:51 +0000 (0:00:00.740) 0:06:55.485 ************ 2025-05-25 03:58:59.422901 | orchestrator | skipping: 
[testbed-node-3] 2025-05-25 03:58:59.422922 | orchestrator | skipping: [testbed-node-4] 2025-05-25 03:58:59.422927 | orchestrator | skipping: [testbed-node-5] 2025-05-25 03:58:59.422931 | orchestrator | 2025-05-25 03:58:59.422936 | orchestrator | TASK [ceph-osd : Disable transparent hugepage] ********************************* 2025-05-25 03:58:59.422941 | orchestrator | Sunday 25 May 2025 03:54:52 +0000 (0:00:00.313) 0:06:55.798 ************ 2025-05-25 03:58:59.422949 | orchestrator | skipping: [testbed-node-3] 2025-05-25 03:58:59.422954 | orchestrator | skipping: [testbed-node-4] 2025-05-25 03:58:59.422959 | orchestrator | skipping: [testbed-node-5] 2025-05-25 03:58:59.422963 | orchestrator | 2025-05-25 03:58:59.422968 | orchestrator | TASK [ceph-osd : Get default vm.min_free_kbytes] ******************************* 2025-05-25 03:58:59.422973 | orchestrator | Sunday 25 May 2025 03:54:52 +0000 (0:00:00.283) 0:06:56.082 ************ 2025-05-25 03:58:59.422978 | orchestrator | ok: [testbed-node-3] 2025-05-25 03:58:59.422982 | orchestrator | ok: [testbed-node-4] 2025-05-25 03:58:59.422987 | orchestrator | ok: [testbed-node-5] 2025-05-25 03:58:59.422992 | orchestrator | 2025-05-25 03:58:59.422996 | orchestrator | TASK [ceph-osd : Set_fact vm_min_free_kbytes] ********************************** 2025-05-25 03:58:59.423001 | orchestrator | Sunday 25 May 2025 03:54:53 +0000 (0:00:01.017) 0:06:57.100 ************ 2025-05-25 03:58:59.423006 | orchestrator | ok: [testbed-node-3] 2025-05-25 03:58:59.423010 | orchestrator | ok: [testbed-node-4] 2025-05-25 03:58:59.423015 | orchestrator | ok: [testbed-node-5] 2025-05-25 03:58:59.423020 | orchestrator | 2025-05-25 03:58:59.423025 | orchestrator | TASK [ceph-osd : Apply operating system tuning] ******************************** 2025-05-25 03:58:59.423029 | orchestrator | Sunday 25 May 2025 03:54:53 +0000 (0:00:00.361) 0:06:57.461 ************ 2025-05-25 03:58:59.423037 | orchestrator | changed: [testbed-node-3] => (item={'name': 
'fs.aio-max-nr', 'value': '1048576', 'enable': True}) 2025-05-25 03:58:59.423042 | orchestrator | changed: [testbed-node-5] => (item={'name': 'fs.aio-max-nr', 'value': '1048576', 'enable': True}) 2025-05-25 03:58:59.423047 | orchestrator | changed: [testbed-node-4] => (item={'name': 'fs.aio-max-nr', 'value': '1048576', 'enable': True}) 2025-05-25 03:58:59.423051 | orchestrator | changed: [testbed-node-3] => (item={'name': 'fs.file-max', 'value': 26234859}) 2025-05-25 03:58:59.423056 | orchestrator | changed: [testbed-node-5] => (item={'name': 'fs.file-max', 'value': 26234859}) 2025-05-25 03:58:59.423061 | orchestrator | changed: [testbed-node-3] => (item={'name': 'vm.zone_reclaim_mode', 'value': 0}) 2025-05-25 03:58:59.423065 | orchestrator | changed: [testbed-node-5] => (item={'name': 'vm.zone_reclaim_mode', 'value': 0}) 2025-05-25 03:58:59.423074 | orchestrator | changed: [testbed-node-3] => (item={'name': 'vm.swappiness', 'value': 10}) 2025-05-25 03:58:59.423079 | orchestrator | changed: [testbed-node-5] => (item={'name': 'vm.swappiness', 'value': 10}) 2025-05-25 03:58:59.423084 | orchestrator | changed: [testbed-node-4] => (item={'name': 'fs.file-max', 'value': 26234859}) 2025-05-25 03:58:59.423088 | orchestrator | changed: [testbed-node-4] => (item={'name': 'vm.zone_reclaim_mode', 'value': 0}) 2025-05-25 03:58:59.423093 | orchestrator | changed: [testbed-node-4] => (item={'name': 'vm.swappiness', 'value': 10}) 2025-05-25 03:58:59.423098 | orchestrator | changed: [testbed-node-3] => (item={'name': 'vm.min_free_kbytes', 'value': '67584'}) 2025-05-25 03:58:59.423103 | orchestrator | changed: [testbed-node-4] => (item={'name': 'vm.min_free_kbytes', 'value': '67584'}) 2025-05-25 03:58:59.423107 | orchestrator | changed: [testbed-node-5] => (item={'name': 'vm.min_free_kbytes', 'value': '67584'}) 2025-05-25 03:58:59.423112 | orchestrator | 2025-05-25 03:58:59.423117 | orchestrator | TASK [ceph-osd : Install dependencies] ***************************************** 
2025-05-25 03:58:59.423121 | orchestrator | Sunday 25 May 2025 03:54:57 +0000 (0:00:03.142) 0:07:00.604 ************
2025-05-25 03:58:59.423126 | orchestrator | skipping: [testbed-node-3]
2025-05-25 03:58:59.423131 | orchestrator | skipping: [testbed-node-4]
2025-05-25 03:58:59.423136 | orchestrator | skipping: [testbed-node-5]
2025-05-25 03:58:59.423140 | orchestrator |
2025-05-25 03:58:59.423145 | orchestrator | TASK [ceph-osd : Include_tasks common.yml] *************************************
2025-05-25 03:58:59.423150 | orchestrator | Sunday 25 May 2025 03:54:57 +0000 (0:00:00.281) 0:07:00.885 ************
2025-05-25 03:58:59.423155 | orchestrator | included: /ansible/roles/ceph-osd/tasks/common.yml for testbed-node-3, testbed-node-4, testbed-node-5
2025-05-25 03:58:59.423163 | orchestrator |
2025-05-25 03:58:59.423167 | orchestrator | TASK [ceph-osd : Create bootstrap-osd and osd directories] *********************
2025-05-25 03:58:59.423172 | orchestrator | Sunday 25 May 2025 03:54:58 +0000 (0:00:00.759) 0:07:01.644 ************
2025-05-25 03:58:59.423177 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-osd/)
2025-05-25 03:58:59.423182 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-osd/)
2025-05-25 03:58:59.423187 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-osd/)
2025-05-25 03:58:59.423191 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/osd/)
2025-05-25 03:58:59.423196 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/osd/)
2025-05-25 03:58:59.423201 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/osd/)
2025-05-25 03:58:59.423206 | orchestrator |
2025-05-25 03:58:59.423210 | orchestrator | TASK [ceph-osd : Get keys from monitors] ***************************************
2025-05-25 03:58:59.423215 | orchestrator | Sunday 25 May 2025 03:54:59 +0000 (0:00:00.928) 0:07:02.573 ************
2025-05-25 03:58:59.423220 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=None)
2025-05-25 03:58:59.423225 | orchestrator | skipping: [testbed-node-3] => (item=None)
2025-05-25 03:58:59.423229 | orchestrator | ok: [testbed-node-3 -> {{ groups.get(mon_group_name)[0] }}]
2025-05-25 03:58:59.423234 | orchestrator |
2025-05-25 03:58:59.423239 | orchestrator | TASK [ceph-osd : Copy ceph key(s) if needed] ***********************************
2025-05-25 03:58:59.423244 | orchestrator | Sunday 25 May 2025 03:55:00 +0000 (0:00:01.920) 0:07:04.493 ************
2025-05-25 03:58:59.423248 | orchestrator | changed: [testbed-node-3] => (item=None)
2025-05-25 03:58:59.423253 | orchestrator | skipping: [testbed-node-3] => (item=None)
2025-05-25 03:58:59.423258 | orchestrator | changed: [testbed-node-3]
2025-05-25 03:58:59.423263 | orchestrator | changed: [testbed-node-4] => (item=None)
2025-05-25 03:58:59.423267 | orchestrator | skipping: [testbed-node-4] => (item=None)
2025-05-25 03:58:59.423272 | orchestrator | changed: [testbed-node-4]
2025-05-25 03:58:59.423277 | orchestrator | changed: [testbed-node-5] => (item=None)
2025-05-25 03:58:59.423281 | orchestrator | skipping: [testbed-node-5] => (item=None)
2025-05-25 03:58:59.423286 | orchestrator | changed: [testbed-node-5]
2025-05-25 03:58:59.423291 | orchestrator |
2025-05-25 03:58:59.423296 | orchestrator | TASK [ceph-osd : Set noup flag] ************************************************
2025-05-25 03:58:59.423300 | orchestrator | Sunday 25 May 2025 03:55:02 +0000 (0:00:01.389) 0:07:05.883 ************
2025-05-25 03:58:59.423305 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)]
2025-05-25 03:58:59.423310 | orchestrator |
2025-05-25 03:58:59.423315 | orchestrator | TASK [ceph-osd : Include_tasks scenarios/lvm.yml] ******************************
2025-05-25 03:58:59.423319 | orchestrator | Sunday 25 May 2025 03:55:04 +0000 (0:00:02.013) 0:07:07.896 ************
2025-05-25 03:58:59.423324 | orchestrator | included: /ansible/roles/ceph-osd/tasks/scenarios/lvm.yml for testbed-node-3, testbed-node-4, testbed-node-5
2025-05-25 03:58:59.423329 | orchestrator |
2025-05-25 03:58:59.423333 | orchestrator | TASK [ceph-osd : Use ceph-volume to create osds] *******************************
2025-05-25 03:58:59.423341 | orchestrator | Sunday 25 May 2025 03:55:04 +0000 (0:00:00.544) 0:07:08.440 ************
2025-05-25 03:58:59.423346 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-02ca1cf7-fa58-5bc0-a798-b7d21582c1b0', 'data_vg': 'ceph-02ca1cf7-fa58-5bc0-a798-b7d21582c1b0'})
2025-05-25 03:58:59.423351 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-33e996ff-67e1-5789-9eb3-97043475c088', 'data_vg': 'ceph-33e996ff-67e1-5789-9eb3-97043475c088'})
2025-05-25 03:58:59.423356 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-02f362e7-7983-50b5-b688-a41104a01860', 'data_vg': 'ceph-02f362e7-7983-50b5-b688-a41104a01860'})
2025-05-25 03:58:59.423364 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-3ece5568-3437-595e-b3ba-b2f91a77c86c', 'data_vg': 'ceph-3ece5568-3437-595e-b3ba-b2f91a77c86c'})
2025-05-25 03:58:59.423369 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-733a1394-dd45-5d63-8d82-63858202edf3', 'data_vg': 'ceph-733a1394-dd45-5d63-8d82-63858202edf3'})
2025-05-25 03:58:59.423377 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-b24cffad-8a1f-50fd-b816-ada28c3c4ac7', 'data_vg': 'ceph-b24cffad-8a1f-50fd-b816-ada28c3c4ac7'})
2025-05-25 03:58:59.423382 | orchestrator |
2025-05-25 03:58:59.423387 | orchestrator | TASK [ceph-osd : Include_tasks scenarios/lvm-batch.yml] ************************
2025-05-25 03:58:59.423391 | orchestrator | Sunday 25 May 2025 03:55:43 +0000 (0:00:38.819) 0:07:47.260 ************
2025-05-25 03:58:59.423396 | orchestrator | skipping: [testbed-node-3]
2025-05-25 03:58:59.423401 | orchestrator | skipping: [testbed-node-4]
2025-05-25 03:58:59.423405 | orchestrator | skipping: [testbed-node-5]
2025-05-25 03:58:59.423410 | orchestrator |
2025-05-25 03:58:59.423415 | orchestrator | TASK [ceph-osd : Include_tasks start_osds.yml] *********************************
2025-05-25 03:58:59.423420 | orchestrator | Sunday 25 May 2025 03:55:44 +0000 (0:00:00.512) 0:07:47.772 ************
2025-05-25 03:58:59.423424 | orchestrator | included: /ansible/roles/ceph-osd/tasks/start_osds.yml for testbed-node-3, testbed-node-4, testbed-node-5
2025-05-25 03:58:59.423429 | orchestrator |
2025-05-25 03:58:59.423434 | orchestrator | TASK [ceph-osd : Get osd ids] **************************************************
2025-05-25 03:58:59.423439 | orchestrator | Sunday 25 May 2025 03:55:44 +0000 (0:00:00.627) 0:07:48.297 ************
2025-05-25 03:58:59.423444 | orchestrator | ok: [testbed-node-3]
2025-05-25 03:58:59.423448 | orchestrator | ok: [testbed-node-4]
2025-05-25 03:58:59.423453 | orchestrator | ok: [testbed-node-5]
2025-05-25 03:58:59.423458 | orchestrator |
2025-05-25 03:58:59.423463 | orchestrator | TASK [ceph-osd : Collect osd ids] **********************************************
2025-05-25 03:58:59.423467 | orchestrator | Sunday 25 May 2025 03:55:45 +0000 (0:00:00.627) 0:07:48.924 ************
2025-05-25 03:58:59.423472 | orchestrator | ok: [testbed-node-3]
2025-05-25 03:58:59.423477 | orchestrator | ok: [testbed-node-4]
2025-05-25 03:58:59.423482 | orchestrator | ok: [testbed-node-5]
2025-05-25 03:58:59.423486 | orchestrator |
2025-05-25 03:58:59.423491 | orchestrator | TASK [ceph-osd : Include_tasks systemd.yml] ************************************
2025-05-25 03:58:59.423496 | orchestrator | Sunday 25 May 2025 03:55:48 +0000 (0:00:02.724) 0:07:51.649 ************
2025-05-25 03:58:59.423500 | orchestrator | included: /ansible/roles/ceph-osd/tasks/systemd.yml for testbed-node-3, testbed-node-4, testbed-node-5
2025-05-25 03:58:59.423505 | orchestrator |
2025-05-25 03:58:59.423510 | orchestrator | TASK [ceph-osd : Generate systemd unit file] ***********************************
2025-05-25 03:58:59.423515 | orchestrator | Sunday 25 May 2025 03:55:48 +0000 (0:00:00.511) 0:07:52.160 ************
2025-05-25 03:58:59.423519 | orchestrator | changed: [testbed-node-3]
2025-05-25 03:58:59.423524 | orchestrator | changed: [testbed-node-4]
2025-05-25 03:58:59.423529 | orchestrator | changed: [testbed-node-5]
2025-05-25 03:58:59.423534 | orchestrator |
2025-05-25 03:58:59.423538 | orchestrator | TASK [ceph-osd : Generate systemd ceph-osd target file] ************************
2025-05-25 03:58:59.423543 | orchestrator | Sunday 25 May 2025 03:55:49 +0000 (0:00:01.088) 0:07:53.249 ************
2025-05-25 03:58:59.423548 | orchestrator | changed: [testbed-node-3]
2025-05-25 03:58:59.423552 | orchestrator | changed: [testbed-node-4]
2025-05-25 03:58:59.423557 | orchestrator | changed: [testbed-node-5]
2025-05-25 03:58:59.423562 | orchestrator |
2025-05-25 03:58:59.423567 | orchestrator | TASK [ceph-osd : Enable ceph-osd.target] ***************************************
2025-05-25 03:58:59.423571 | orchestrator | Sunday 25 May 2025 03:55:51 +0000 (0:00:01.663) 0:07:54.554 ************
2025-05-25 03:58:59.423576 | orchestrator | changed: [testbed-node-3]
2025-05-25 03:58:59.423581 | orchestrator | changed: [testbed-node-4]
2025-05-25 03:58:59.423586 | orchestrator | changed: [testbed-node-5]
2025-05-25 03:58:59.423590 | orchestrator |
2025-05-25 03:58:59.423595 | orchestrator | TASK [ceph-osd : Ensure systemd service override directory exists] *************
2025-05-25 03:58:59.423600 | orchestrator | Sunday 25 May 2025 03:55:52 +0000 (0:00:01.663) 0:07:56.217 ************
2025-05-25 03:58:59.423605 | orchestrator | skipping: [testbed-node-3]
2025-05-25 03:58:59.423613 | orchestrator | skipping: [testbed-node-4]
2025-05-25 03:58:59.423618 | orchestrator | skipping: [testbed-node-5]
2025-05-25 03:58:59.423622 | orchestrator |
2025-05-25 03:58:59.423627 | orchestrator | TASK [ceph-osd : Add ceph-osd systemd service overrides] ***********************
2025-05-25 03:58:59.423632 | orchestrator | Sunday 25 May 2025 03:55:53 +0000 (0:00:00.315) 0:07:56.533 ************
2025-05-25 03:58:59.423637 | orchestrator | skipping: [testbed-node-3]
2025-05-25 03:58:59.423641 | orchestrator | skipping: [testbed-node-4]
2025-05-25 03:58:59.423646 | orchestrator | skipping: [testbed-node-5]
2025-05-25 03:58:59.423651 | orchestrator |
2025-05-25 03:58:59.423655 | orchestrator | TASK [ceph-osd : Ensure /var/lib/ceph/osd/- is present] *********
2025-05-25 03:58:59.423660 | orchestrator | Sunday 25 May 2025 03:55:53 +0000 (0:00:00.334) 0:07:56.868 ************
2025-05-25 03:58:59.423665 | orchestrator | ok: [testbed-node-3] => (item=5)
2025-05-25 03:58:59.423669 | orchestrator | ok: [testbed-node-4] => (item=3)
2025-05-25 03:58:59.423674 | orchestrator | ok: [testbed-node-5] => (item=1)
2025-05-25 03:58:59.423679 | orchestrator | ok: [testbed-node-3] => (item=0)
2025-05-25 03:58:59.423686 | orchestrator | ok: [testbed-node-4] => (item=2)
2025-05-25 03:58:59.423691 | orchestrator | ok: [testbed-node-5] => (item=4)
2025-05-25 03:58:59.423696 | orchestrator |
2025-05-25 03:58:59.423701 | orchestrator | TASK [ceph-osd : Write run file in /var/lib/ceph/osd/xxxx/run] *****************
2025-05-25 03:58:59.423705 | orchestrator | Sunday 25 May 2025 03:55:54 +0000 (0:00:01.338) 0:07:58.206 ************
2025-05-25 03:58:59.423710 | orchestrator | changed: [testbed-node-3] => (item=5)
2025-05-25 03:58:59.423715 | orchestrator | changed: [testbed-node-4] => (item=3)
2025-05-25 03:58:59.423720 | orchestrator | changed: [testbed-node-5] => (item=1)
2025-05-25 03:58:59.423724 | orchestrator | changed: [testbed-node-3] => (item=0)
2025-05-25 03:58:59.423729 | orchestrator | changed: [testbed-node-4] => (item=2)
2025-05-25 03:58:59.423734 | orchestrator | changed: [testbed-node-5] => (item=4)
2025-05-25 03:58:59.423738 | orchestrator |
2025-05-25 03:58:59.423743 | orchestrator | TASK [ceph-osd : Systemd start osd] ********************************************
2025-05-25 03:58:59.423751 | orchestrator | Sunday 25 May 2025 03:55:56 +0000 (0:00:02.113) 0:08:00.319 ************
2025-05-25 03:58:59.423756 | orchestrator | changed: [testbed-node-3] => (item=5)
2025-05-25 03:58:59.423761 | orchestrator | changed: [testbed-node-4] => (item=3)
2025-05-25 03:58:59.423765 | orchestrator | changed: [testbed-node-5] => (item=1)
2025-05-25 03:58:59.423770 | orchestrator | changed: [testbed-node-3] => (item=0)
2025-05-25 03:58:59.423775 | orchestrator | changed: [testbed-node-4] => (item=2)
2025-05-25 03:58:59.423779 | orchestrator | changed: [testbed-node-5] => (item=4)
2025-05-25 03:58:59.423784 | orchestrator |
2025-05-25 03:58:59.423789 | orchestrator | TASK [ceph-osd : Unset noup flag] **********************************************
2025-05-25 03:58:59.423794 | orchestrator | Sunday 25 May 2025 03:56:00 +0000 (0:00:03.498) 0:08:03.817 ************
2025-05-25 03:58:59.423798 | orchestrator | skipping: [testbed-node-3]
2025-05-25 03:58:59.423803 | orchestrator | skipping: [testbed-node-4]
2025-05-25 03:58:59.423808 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)]
2025-05-25 03:58:59.423813 | orchestrator |
2025-05-25 03:58:59.423817 | orchestrator | TASK [ceph-osd : Wait for all osd to be up] ************************************
2025-05-25 03:58:59.423822 | orchestrator | Sunday 25 May 2025 03:56:03 +0000 (0:00:02.977) 0:08:06.795 ************
2025-05-25 03:58:59.423827 | orchestrator | skipping: [testbed-node-3]
2025-05-25 03:58:59.423831 | orchestrator | skipping: [testbed-node-4]
2025-05-25 03:58:59.423836 | orchestrator | FAILED - RETRYING: [testbed-node-5 -> testbed-node-0]: Wait for all osd to be up (60 retries left).
2025-05-25 03:58:59.423841 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)]
2025-05-25 03:58:59.423846 | orchestrator |
2025-05-25 03:58:59.423851 | orchestrator | TASK [ceph-osd : Include crush_rules.yml] **************************************
2025-05-25 03:58:59.423855 | orchestrator | Sunday 25 May 2025 03:56:16 +0000 (0:00:12.928) 0:08:19.724 ************
2025-05-25 03:58:59.423860 | orchestrator | skipping: [testbed-node-3]
2025-05-25 03:58:59.423870 | orchestrator | skipping: [testbed-node-4]
2025-05-25 03:58:59.423875 | orchestrator | skipping: [testbed-node-5]
2025-05-25 03:58:59.423880 | orchestrator |
2025-05-25 03:58:59.423884 | orchestrator | RUNNING HANDLER [ceph-handler : Make tempdir for scripts] **********************
2025-05-25 03:58:59.423889 | orchestrator | Sunday 25 May 2025 03:56:17 +0000 (0:00:00.850) 0:08:20.575 ************
2025-05-25 03:58:59.423894 | orchestrator | skipping: [testbed-node-3]
2025-05-25 03:58:59.423898 | orchestrator | skipping: [testbed-node-4]
2025-05-25 03:58:59.423941 | orchestrator | skipping: [testbed-node-5]
2025-05-25 03:58:59.423946 | orchestrator |
2025-05-25 03:58:59.423951 | orchestrator | RUNNING HANDLER [ceph-handler : Osds handler] **********************************
2025-05-25 03:58:59.423956 | orchestrator | Sunday 25 May 2025 03:56:17 +0000 (0:00:00.607) 0:08:21.182 ************
2025-05-25 03:58:59.423961 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_osds.yml for testbed-node-3, testbed-node-4, testbed-node-5
2025-05-25 03:58:59.423966 | orchestrator |
2025-05-25 03:58:59.423970 | orchestrator | RUNNING HANDLER [ceph-handler : Set_fact trigger_restart] **********************
2025-05-25 03:58:59.423975 | orchestrator | Sunday 25 May 2025 03:56:18 +0000 (0:00:00.514) 0:08:21.697 ************
2025-05-25 03:58:59.423980 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2025-05-25 03:58:59.423984 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2025-05-25 03:58:59.423989 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2025-05-25 03:58:59.423994 | orchestrator | skipping: [testbed-node-3]
2025-05-25 03:58:59.423999 | orchestrator |
2025-05-25 03:58:59.424003 | orchestrator | RUNNING HANDLER [ceph-handler : Set _osd_handler_called before restart] ********
2025-05-25 03:58:59.424008 | orchestrator | Sunday 25 May 2025 03:56:18 +0000 (0:00:00.377) 0:08:22.074 ************
2025-05-25 03:58:59.424013 | orchestrator | skipping: [testbed-node-3]
2025-05-25 03:58:59.424018 | orchestrator | skipping: [testbed-node-4]
2025-05-25 03:58:59.424022 | orchestrator | skipping: [testbed-node-5]
2025-05-25 03:58:59.424027 | orchestrator |
2025-05-25 03:58:59.424032 | orchestrator | RUNNING HANDLER [ceph-handler : Unset noup flag] *******************************
2025-05-25 03:58:59.424037 | orchestrator | Sunday 25 May 2025 03:56:18 +0000 (0:00:00.285) 0:08:22.360 ************
2025-05-25 03:58:59.424041 | orchestrator | skipping: [testbed-node-3]
2025-05-25 03:58:59.424046 | orchestrator |
2025-05-25 03:58:59.424051 | orchestrator | RUNNING HANDLER [ceph-handler : Copy osd restart script] ***********************
2025-05-25 03:58:59.424055 | orchestrator | Sunday 25 May 2025 03:56:19 +0000 (0:00:00.555) 0:08:22.576 ************
2025-05-25 03:58:59.424060 | orchestrator | skipping: [testbed-node-3]
2025-05-25 03:58:59.424065 | orchestrator | skipping: [testbed-node-4]
2025-05-25 03:58:59.424070 | orchestrator | skipping: [testbed-node-5]
2025-05-25 03:58:59.424074 | orchestrator |
2025-05-25 03:58:59.424079 | orchestrator | RUNNING HANDLER [ceph-handler : Get pool list] *********************************
2025-05-25 03:58:59.424084 | orchestrator | Sunday 25 May 2025 03:56:19 +0000 (0:00:00.555) 0:08:23.132 ************
2025-05-25 03:58:59.424089 | orchestrator | skipping: [testbed-node-3]
2025-05-25 03:58:59.424093 | orchestrator |
2025-05-25 03:58:59.424098 | orchestrator | RUNNING HANDLER [ceph-handler : Get balancer module status] ********************
2025-05-25 03:58:59.424103 | orchestrator | Sunday 25 May 2025 03:56:19 +0000 (0:00:00.213) 0:08:23.345 ************
2025-05-25 03:58:59.424110 | orchestrator | skipping: [testbed-node-3]
2025-05-25 03:58:59.424115 | orchestrator |
2025-05-25 03:58:59.424120 | orchestrator | RUNNING HANDLER [ceph-handler : Set_fact pools_pgautoscaler_mode] **************
2025-05-25 03:58:59.424125 | orchestrator | Sunday 25 May 2025 03:56:20 +0000 (0:00:00.223) 0:08:23.568 ************
2025-05-25 03:58:59.424130 | orchestrator | skipping: [testbed-node-3]
2025-05-25 03:58:59.424134 | orchestrator |
2025-05-25 03:58:59.424139 | orchestrator | RUNNING HANDLER [ceph-handler : Disable balancer] ******************************
2025-05-25 03:58:59.424144 | orchestrator | Sunday 25 May 2025 03:56:20 +0000 (0:00:00.129) 0:08:23.698 ************
2025-05-25 03:58:59.424148 | orchestrator | skipping: [testbed-node-3]
2025-05-25 03:58:59.424157 | orchestrator |
2025-05-25 03:58:59.424162 | orchestrator | RUNNING HANDLER [ceph-handler : Disable pg autoscale on pools] *****************
2025-05-25 03:58:59.424166 | orchestrator | Sunday 25 May 2025 03:56:20 +0000 (0:00:00.236) 0:08:23.934 ************
2025-05-25 03:58:59.424171 | orchestrator | skipping: [testbed-node-3]
2025-05-25 03:58:59.424176 | orchestrator |
2025-05-25 03:58:59.424184 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph osds daemon(s)] *******************
2025-05-25 03:58:59.424189 | orchestrator | Sunday 25 May 2025 03:56:20 +0000 (0:00:00.280) 0:08:24.214 ************
2025-05-25 03:58:59.424193 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2025-05-25 03:58:59.424198 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2025-05-25 03:58:59.424203 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2025-05-25 03:58:59.424208 | orchestrator | skipping: [testbed-node-3]
2025-05-25 03:58:59.424212 | orchestrator |
2025-05-25 03:58:59.424217 | orchestrator | RUNNING HANDLER [ceph-handler : Set _osd_handler_called after restart] *********
2025-05-25 03:58:59.424222 | orchestrator | Sunday 25 May 2025 03:56:21 +0000 (0:00:00.368) 0:08:24.583 ************
2025-05-25 03:58:59.424227 | orchestrator | skipping: [testbed-node-3]
2025-05-25 03:58:59.424231 | orchestrator | skipping: [testbed-node-4]
2025-05-25 03:58:59.424236 | orchestrator | skipping: [testbed-node-5]
2025-05-25 03:58:59.424241 | orchestrator |
2025-05-25 03:58:59.424246 | orchestrator | RUNNING HANDLER [ceph-handler : Re-enable pg autoscale on pools] ***************
2025-05-25 03:58:59.424250 | orchestrator | Sunday 25 May 2025 03:56:21 +0000 (0:00:00.322) 0:08:24.905 ************
2025-05-25 03:58:59.424255 | orchestrator | skipping: [testbed-node-3]
2025-05-25 03:58:59.424260 | orchestrator |
2025-05-25 03:58:59.424265 | orchestrator | RUNNING HANDLER [ceph-handler : Re-enable balancer] ****************************
2025-05-25 03:58:59.424269 | orchestrator | Sunday 25 May 2025 03:56:22 +0000 (0:00:00.819) 0:08:25.724 ************
2025-05-25 03:58:59.424274 | orchestrator | skipping: [testbed-node-3]
2025-05-25 03:58:59.424279 | orchestrator |
2025-05-25 03:58:59.424284 | orchestrator | PLAY [Apply role ceph-crash] ***************************************************
2025-05-25 03:58:59.424288 | orchestrator |
2025-05-25 03:58:59.424293 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************
2025-05-25 03:58:59.424298 | orchestrator | Sunday 25 May 2025 03:56:22 +0000 (0:00:00.731) 0:08:26.455 ************
2025-05-25 03:58:59.424303 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-05-25 03:58:59.424308 | orchestrator |
2025-05-25 03:58:59.424313 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] *********************
2025-05-25 03:58:59.424317 | orchestrator | Sunday 25 May 2025 03:56:24 +0000 (0:00:01.525) 0:08:27.981 ************
2025-05-25 03:58:59.424322 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-05-25 03:58:59.424327 | orchestrator |
2025-05-25 03:58:59.424332 | orchestrator | TASK [ceph-handler : Check for a mon container] ********************************
2025-05-25 03:58:59.424336 | orchestrator | Sunday 25 May 2025 03:56:25 +0000 (0:00:01.123) 0:08:29.104 ************
2025-05-25 03:58:59.424341 | orchestrator | skipping: [testbed-node-3]
2025-05-25 03:58:59.424346 | orchestrator | skipping: [testbed-node-4]
2025-05-25 03:58:59.424351 | orchestrator | ok: [testbed-node-0]
2025-05-25 03:58:59.424355 | orchestrator | ok: [testbed-node-1]
2025-05-25 03:58:59.424360 | orchestrator | ok: [testbed-node-2]
2025-05-25 03:58:59.424365 | orchestrator | skipping: [testbed-node-5]
2025-05-25 03:58:59.424369 | orchestrator |
2025-05-25 03:58:59.424374 | orchestrator | TASK [ceph-handler : Check for an osd container] *******************************
2025-05-25 03:58:59.424379 | orchestrator | Sunday 25 May 2025 03:56:26 +0000 (0:00:00.672) 0:08:29.777 ************
2025-05-25 03:58:59.424384 | orchestrator | skipping: [testbed-node-0]
2025-05-25 03:58:59.424388 | orchestrator | skipping: [testbed-node-1]
2025-05-25 03:58:59.424396 | orchestrator | skipping: [testbed-node-2]
2025-05-25 03:58:59.424401 | orchestrator | ok: [testbed-node-3]
2025-05-25 03:58:59.424406 | orchestrator | ok: [testbed-node-4]
2025-05-25 03:58:59.424411 | orchestrator | ok: [testbed-node-5]
2025-05-25 03:58:59.424415 | orchestrator |
2025-05-25 03:58:59.424420 | orchestrator | TASK [ceph-handler : Check for a mds container] ********************************
2025-05-25 03:58:59.424425 | orchestrator | Sunday 25 May 2025 03:56:27 +0000 (0:00:00.918) 0:08:30.696 ************
2025-05-25 03:58:59.424430 | orchestrator | skipping: [testbed-node-0]
2025-05-25 03:58:59.424434 | orchestrator | skipping: [testbed-node-1]
2025-05-25 03:58:59.424439 | orchestrator | skipping: [testbed-node-2]
2025-05-25 03:58:59.424444 | orchestrator | ok: [testbed-node-3]
2025-05-25 03:58:59.424448 | orchestrator | ok: [testbed-node-4]
2025-05-25 03:58:59.424453 | orchestrator | ok: [testbed-node-5]
2025-05-25 03:58:59.424458 | orchestrator |
2025-05-25 03:58:59.424463 | orchestrator | TASK [ceph-handler : Check for a rgw container] ********************************
2025-05-25 03:58:59.424468 | orchestrator | Sunday 25 May 2025 03:56:28 +0000 (0:00:01.026) 0:08:31.722 ************
2025-05-25 03:58:59.424472 | orchestrator | skipping: [testbed-node-0]
2025-05-25 03:58:59.424477 | orchestrator | skipping: [testbed-node-1]
2025-05-25 03:58:59.424481 | orchestrator | skipping: [testbed-node-2]
2025-05-25 03:58:59.424485 | orchestrator | ok: [testbed-node-3]
2025-05-25 03:58:59.424490 | orchestrator | ok: [testbed-node-4]
2025-05-25 03:58:59.424494 | orchestrator | ok: [testbed-node-5]
2025-05-25 03:58:59.424499 | orchestrator |
2025-05-25 03:58:59.424503 | orchestrator | TASK [ceph-handler : Check for a mgr container] ********************************
2025-05-25 03:58:59.424510 | orchestrator | Sunday 25 May 2025 03:56:29 +0000 (0:00:00.904) 0:08:32.627 ************
2025-05-25 03:58:59.424515 | orchestrator | skipping: [testbed-node-3]
2025-05-25 03:58:59.424519 | orchestrator | skipping: [testbed-node-4]
2025-05-25 03:58:59.424524 | orchestrator | ok: [testbed-node-0]
2025-05-25 03:58:59.424528 | orchestrator | ok: [testbed-node-1]
2025-05-25 03:58:59.424533 | orchestrator | ok: [testbed-node-2]
2025-05-25 03:58:59.424537 | orchestrator | skipping: [testbed-node-5]
2025-05-25 03:58:59.424542 | orchestrator |
2025-05-25 03:58:59.424546 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] *************************
2025-05-25 03:58:59.424551 | orchestrator | Sunday 25 May 2025 03:56:29 +0000 (0:00:00.842) 0:08:33.469 ************
2025-05-25 03:58:59.424555 | orchestrator | skipping: [testbed-node-0]
2025-05-25 03:58:59.424560 | orchestrator | skipping: [testbed-node-1]
2025-05-25 03:58:59.424564 | orchestrator | skipping: [testbed-node-2]
2025-05-25 03:58:59.424569 | orchestrator | skipping: [testbed-node-3]
2025-05-25 03:58:59.424573 | orchestrator | skipping: [testbed-node-4]
2025-05-25 03:58:59.424578 | orchestrator | skipping: [testbed-node-5]
2025-05-25 03:58:59.424582 | orchestrator |
2025-05-25 03:58:59.424589 | orchestrator | TASK [ceph-handler : Check for a nfs container] ********************************
2025-05-25 03:58:59.424594 | orchestrator | Sunday 25 May 2025 03:56:30 +0000 (0:00:00.591) 0:08:34.061 ************
2025-05-25 03:58:59.424598 | orchestrator | skipping: [testbed-node-0]
2025-05-25 03:58:59.424603 | orchestrator | skipping: [testbed-node-1]
2025-05-25 03:58:59.424607 | orchestrator | skipping: [testbed-node-2]
2025-05-25 03:58:59.424612 | orchestrator | skipping: [testbed-node-3]
2025-05-25 03:58:59.424616 | orchestrator | skipping: [testbed-node-4]
2025-05-25 03:58:59.424621 | orchestrator | skipping: [testbed-node-5]
2025-05-25 03:58:59.424625 | orchestrator |
2025-05-25 03:58:59.424630 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] *************************
2025-05-25 03:58:59.424634 | orchestrator | Sunday 25 May 2025 03:56:31 +0000 (0:00:00.798) 0:08:34.859 ************
2025-05-25 03:58:59.424639 | orchestrator | ok: [testbed-node-0]
2025-05-25 03:58:59.424643 | orchestrator | ok: [testbed-node-1]
2025-05-25 03:58:59.424648 | orchestrator | ok: [testbed-node-2]
2025-05-25 03:58:59.424652 | orchestrator | ok: [testbed-node-3]
2025-05-25 03:58:59.424657 | orchestrator | ok: [testbed-node-4]
2025-05-25 03:58:59.424661 | orchestrator | ok: [testbed-node-5]
2025-05-25 03:58:59.424665 | orchestrator |
2025-05-25 03:58:59.424673 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] **********************
2025-05-25 03:58:59.424678 | orchestrator | Sunday 25 May 2025 03:56:32 +0000 (0:00:00.993) 0:08:35.852 ************
2025-05-25 03:58:59.424682 | orchestrator | ok: [testbed-node-0]
2025-05-25 03:58:59.424687 | orchestrator | ok: [testbed-node-1]
2025-05-25 03:58:59.424691 | orchestrator | ok: [testbed-node-2]
2025-05-25 03:58:59.424696 | orchestrator | ok: [testbed-node-3]
2025-05-25 03:58:59.424700 | orchestrator | ok: [testbed-node-4]
2025-05-25 03:58:59.424704 | orchestrator | ok: [testbed-node-5]
2025-05-25 03:58:59.424709 | orchestrator |
2025-05-25 03:58:59.424713 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] *******************
2025-05-25 03:58:59.424718 | orchestrator | Sunday 25 May 2025 03:56:33 +0000 (0:00:01.524) 0:08:37.377 ************
2025-05-25 03:58:59.424722 | orchestrator | skipping: [testbed-node-0]
2025-05-25 03:58:59.424727 | orchestrator | skipping: [testbed-node-1]
2025-05-25 03:58:59.424731 | orchestrator | skipping: [testbed-node-2]
2025-05-25 03:58:59.424736 | orchestrator | skipping: [testbed-node-3]
2025-05-25 03:58:59.424740 | orchestrator | skipping: [testbed-node-4]
2025-05-25 03:58:59.424745 | orchestrator | skipping: [testbed-node-5]
2025-05-25 03:58:59.424749 | orchestrator |
2025-05-25 03:58:59.424754 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ******************************
2025-05-25 03:58:59.424758 | orchestrator | Sunday 25 May 2025 03:56:34 +0000 (0:00:00.563) 0:08:37.940 ************
2025-05-25 03:58:59.424763 | orchestrator | ok: [testbed-node-0]
2025-05-25 03:58:59.424767 | orchestrator | ok: [testbed-node-1]
2025-05-25 03:58:59.424772 | orchestrator | ok: [testbed-node-2]
2025-05-25 03:58:59.424776 | orchestrator | skipping: [testbed-node-3]
2025-05-25 03:58:59.424781 | orchestrator | skipping: [testbed-node-4]
2025-05-25 03:58:59.424785 | orchestrator | skipping: [testbed-node-5]
2025-05-25 03:58:59.424789 | orchestrator |
2025-05-25 03:58:59.424794 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ******************************
2025-05-25 03:58:59.424799 | orchestrator | Sunday 25 May 2025 03:56:35 +0000 (0:00:00.760) 0:08:38.701 ************
2025-05-25 03:58:59.424803 | orchestrator | skipping: [testbed-node-0]
2025-05-25 03:58:59.424808 | orchestrator | skipping: [testbed-node-1]
2025-05-25 03:58:59.424812 | orchestrator | skipping: [testbed-node-2]
2025-05-25 03:58:59.424831 | orchestrator | ok: [testbed-node-3]
2025-05-25 03:58:59.424835 | orchestrator | ok: [testbed-node-4]
2025-05-25 03:58:59.424840 | orchestrator | ok: [testbed-node-5]
2025-05-25 03:58:59.424844 | orchestrator |
2025-05-25 03:58:59.424849 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ******************************
2025-05-25 03:58:59.424854 | orchestrator | Sunday 25 May 2025 03:56:35 +0000 (0:00:00.622) 0:08:39.323 ************
2025-05-25 03:58:59.424858 | orchestrator | skipping: [testbed-node-0]
2025-05-25 03:58:59.424863 | orchestrator | skipping: [testbed-node-1]
2025-05-25 03:58:59.424867 | orchestrator | skipping: [testbed-node-2]
2025-05-25 03:58:59.424871 | orchestrator | ok: [testbed-node-3]
2025-05-25 03:58:59.424876 | orchestrator | ok: [testbed-node-4]
2025-05-25 03:58:59.424880 | orchestrator | ok: [testbed-node-5]
2025-05-25 03:58:59.424885 | orchestrator |
2025-05-25 03:58:59.424889 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ******************************
2025-05-25 03:58:59.424894 | orchestrator | Sunday 25 May 2025 03:56:36 +0000 (0:00:00.816) 0:08:40.139 ************
2025-05-25 03:58:59.424898 | orchestrator | skipping: [testbed-node-0]
2025-05-25 03:58:59.424914 | orchestrator | skipping: [testbed-node-1]
2025-05-25 03:58:59.424922 | orchestrator | skipping: [testbed-node-2]
2025-05-25 03:58:59.424929 | orchestrator | ok: [testbed-node-3]
2025-05-25 03:58:59.424937 | orchestrator | ok: [testbed-node-4]
2025-05-25 03:58:59.424942 | orchestrator | ok: [testbed-node-5]
2025-05-25 03:58:59.424946 | orchestrator |
2025-05-25 03:58:59.424951 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ******************************
2025-05-25 03:58:59.424955 | orchestrator | Sunday 25 May 2025 03:56:37 +0000 (0:00:00.631) 0:08:40.771 ************
2025-05-25 03:58:59.424960 | orchestrator | skipping: [testbed-node-0]
2025-05-25 03:58:59.424968 | orchestrator | skipping: [testbed-node-1]
2025-05-25 03:58:59.424972 | orchestrator | skipping: [testbed-node-2]
2025-05-25 03:58:59.424976 | orchestrator | skipping: [testbed-node-3]
2025-05-25 03:58:59.424981 | orchestrator | skipping: [testbed-node-4]
2025-05-25 03:58:59.424985 | orchestrator | skipping: [testbed-node-5]
2025-05-25 03:58:59.424990 | orchestrator |
2025-05-25 03:58:59.424994 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ******************************
2025-05-25 03:58:59.425002 | orchestrator | Sunday 25 May 2025 03:56:38 +0000 (0:00:00.865) 0:08:41.637 ************
2025-05-25 03:58:59.425006 | orchestrator | skipping: [testbed-node-0]
2025-05-25 03:58:59.425011 | orchestrator | skipping: [testbed-node-1]
2025-05-25 03:58:59.425015 | orchestrator | skipping: [testbed-node-2]
2025-05-25 03:58:59.425019 | orchestrator | skipping: [testbed-node-3]
2025-05-25 03:58:59.425024 | orchestrator | skipping: [testbed-node-4]
2025-05-25 03:58:59.425028 | orchestrator | skipping: [testbed-node-5]
2025-05-25 03:58:59.425033 | orchestrator |
2025-05-25 03:58:59.425037 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ******************************
2025-05-25 03:58:59.425042 | orchestrator | Sunday 25 May 2025 03:56:38 +0000 (0:00:00.584) 0:08:42.221 ************
2025-05-25 03:58:59.425046 | orchestrator | ok: [testbed-node-0]
2025-05-25 03:58:59.425051 | orchestrator | ok: [testbed-node-1]
2025-05-25 03:58:59.425055 | orchestrator | ok: [testbed-node-2]
2025-05-25 03:58:59.425059 | orchestrator | skipping: [testbed-node-3]
2025-05-25 03:58:59.425064 | orchestrator | skipping: [testbed-node-4]
2025-05-25 03:58:59.425068 | orchestrator | skipping: [testbed-node-5]
2025-05-25 03:58:59.425073 | orchestrator |
2025-05-25 03:58:59.425080 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] ****************************
2025-05-25 03:58:59.425085 | orchestrator | Sunday 25 May 2025 03:56:39 +0000 (0:00:00.768) 0:08:42.990 ************
2025-05-25 03:58:59.425089 | orchestrator | ok: [testbed-node-0]
2025-05-25 03:58:59.425094 | orchestrator | ok: [testbed-node-1]
2025-05-25 03:58:59.425098 | orchestrator | ok: [testbed-node-2]
2025-05-25 03:58:59.425102 | orchestrator | ok: [testbed-node-3]
2025-05-25 03:58:59.425107 | orchestrator | ok: [testbed-node-4]
2025-05-25 03:58:59.425111 | orchestrator | ok: [testbed-node-5]
2025-05-25 03:58:59.425116 | orchestrator |
2025-05-25 03:58:59.425120 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] *************************
2025-05-25 03:58:59.425125 | orchestrator | Sunday 25 May 2025 03:56:40 +0000 (0:00:00.623) 0:08:43.614 ************
2025-05-25 03:58:59.425129 | orchestrator | ok: [testbed-node-0]
2025-05-25 03:58:59.425134 | orchestrator | ok: [testbed-node-1]
2025-05-25 03:58:59.425138 | orchestrator | ok: [testbed-node-2]
2025-05-25 03:58:59.425142 | orchestrator | ok: [testbed-node-3]
2025-05-25 03:58:59.425147 | orchestrator | ok: [testbed-node-4]
2025-05-25 03:58:59.425151 | orchestrator | ok: [testbed-node-5]
2025-05-25 03:58:59.425156 | orchestrator |
2025-05-25 03:58:59.425160 | orchestrator | TASK [ceph-crash : Create client.crash keyring] ********************************
2025-05-25 03:58:59.425164 | orchestrator | Sunday 25 May 2025 03:56:41 +0000 (0:00:01.218) 0:08:44.832 ************
2025-05-25 03:58:59.425169 | orchestrator | changed: [testbed-node-0]
2025-05-25 03:58:59.425173 | orchestrator |
2025-05-25 03:58:59.425178 | orchestrator | TASK [ceph-crash : Get keys from monitors] *************************************
2025-05-25 03:58:59.425182 | orchestrator | Sunday 25 May 2025 03:56:45 +0000 (0:00:03.805) 0:08:48.637 ************
2025-05-25 03:58:59.425187 | orchestrator | ok: [testbed-node-0]
2025-05-25 03:58:59.425191 | orchestrator |
2025-05-25 03:58:59.425196 | orchestrator | TASK [ceph-crash : Copy ceph key(s) if needed] *********************************
2025-05-25 03:58:59.425200 | orchestrator | Sunday 25 May 2025 03:56:46 +0000 (0:00:01.855) 0:08:50.493 ************
2025-05-25 03:58:59.425204 | orchestrator | ok: [testbed-node-0]
2025-05-25 03:58:59.425209 | orchestrator | changed: [testbed-node-1]
2025-05-25 03:58:59.425213 | orchestrator | changed: [testbed-node-2]
2025-05-25 03:58:59.425218 | orchestrator | changed: [testbed-node-3]
2025-05-25 03:58:59.425222 | orchestrator | changed: [testbed-node-4]
2025-05-25 03:58:59.425227 | orchestrator | changed: [testbed-node-5]
2025-05-25 03:58:59.425234 | orchestrator |
2025-05-25 03:58:59.425239 | orchestrator | TASK [ceph-crash : Create /var/lib/ceph/crash/posted] **************************
2025-05-25 03:58:59.425244 | orchestrator | Sunday 25 May 2025 03:56:48 +0000 (0:00:01.784) 0:08:52.277 ************
2025-05-25 03:58:59.425248 | orchestrator | changed: [testbed-node-0]
2025-05-25 03:58:59.425252 | orchestrator | changed: [testbed-node-1]
2025-05-25 03:58:59.425257 | orchestrator | changed: [testbed-node-2]
2025-05-25 03:58:59.425261 | orchestrator | changed: [testbed-node-3]
2025-05-25 03:58:59.425266 | orchestrator | changed: [testbed-node-4]
2025-05-25 03:58:59.425270 | orchestrator | changed: [testbed-node-5]
2025-05-25 03:58:59.425274 | orchestrator |
2025-05-25 03:58:59.425279 | orchestrator | TASK [ceph-crash : Include_tasks systemd.yml] **********************************
2025-05-25 03:58:59.425283 | orchestrator | Sunday 25 May 2025 03:56:49 +0000 (0:00:00.968) 0:08:53.246 ************
2025-05-25 03:58:59.425288 | orchestrator | included: /ansible/roles/ceph-crash/tasks/systemd.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-05-25 03:58:59.425293 | orchestrator |
2025-05-25 03:58:59.425297 | orchestrator | TASK [ceph-crash : Generate systemd unit file for ceph-crash container] ********
2025-05-25 03:58:59.425302 | orchestrator | Sunday 25 May 2025 03:56:50 +0000 (0:00:01.220) 0:08:54.467 ************
2025-05-25 03:58:59.425306 | orchestrator | changed: [testbed-node-0]
2025-05-25 03:58:59.425310 | orchestrator | changed: [testbed-node-1]
2025-05-25 03:58:59.425315 | orchestrator | changed: [testbed-node-2]
2025-05-25 03:58:59.425319 | orchestrator | changed: [testbed-node-3]
2025-05-25 03:58:59.425324 | orchestrator | changed: [testbed-node-4]
2025-05-25 03:58:59.425328 | orchestrator | changed: [testbed-node-5]
2025-05-25 03:58:59.425332 | orchestrator |
2025-05-25 03:58:59.425337 | orchestrator | TASK [ceph-crash : Start the ceph-crash service] *******************************
2025-05-25 03:58:59.425341 | orchestrator | Sunday 25 May 2025 03:56:52 +0000 (0:00:01.814) 0:08:56.282 ************
2025-05-25 03:58:59.425346 | orchestrator | changed: [testbed-node-0]
2025-05-25 03:58:59.425350 | orchestrator | changed: [testbed-node-2]
2025-05-25 03:58:59.425355 | orchestrator | changed: [testbed-node-1]
2025-05-25 03:58:59.425359 | orchestrator | changed: [testbed-node-3]
2025-05-25 03:58:59.425363 | orchestrator | changed: [testbed-node-4]
2025-05-25 03:58:59.425368 | orchestrator | changed: [testbed-node-5]
2025-05-25 03:58:59.425372 | orchestrator |
2025-05-25 03:58:59.425377 | orchestrator | RUNNING HANDLER [ceph-handler : Ceph crash handler] ****************************
2025-05-25 03:58:59.425381 | orchestrator | Sunday 25 May 2025 03:56:55 +0000 (0:00:03.238) 0:08:59.520 ************
2025-05-25 03:58:59.425386 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_crash.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-05-25 03:58:59.425390 | orchestrator |
2025-05-25 03:58:59.425395 | orchestrator | RUNNING HANDLER [ceph-handler : Set _crash_handler_called before restart] ******
2025-05-25 03:58:59.425402 | orchestrator | Sunday 25 May 2025 03:56:57 +0000 (0:00:01.261) 0:09:00.781 ************
2025-05-25 03:58:59.425407 | orchestrator | ok: [testbed-node-0]
2025-05-25 03:58:59.425411 | orchestrator | ok: [testbed-node-1]
2025-05-25 03:58:59.425415 | orchestrator | ok: [testbed-node-2]
2025-05-25 03:58:59.425420 | orchestrator | ok: [testbed-node-3]
2025-05-25 03:58:59.425424 | orchestrator | ok: [testbed-node-4]
2025-05-25 03:58:59.425429 | orchestrator | ok: [testbed-node-5]
2025-05-25 03:58:59.425433 | orchestrator |
2025-05-25 03:58:59.425438 | orchestrator | RUNNING HANDLER [ceph-handler : Restart the ceph-crash service] ****************
2025-05-25 03:58:59.425442 | orchestrator | Sunday 25 May 2025 03:56:58 +0000 (0:00:00.804) 0:09:01.585 ************
2025-05-25 03:58:59.425447 | orchestrator | changed: [testbed-node-0]
2025-05-25 03:58:59.425451 | orchestrator | changed: [testbed-node-1]
2025-05-25 03:58:59.425455 | orchestrator | changed: [testbed-node-2]
2025-05-25 03:58:59.425460 | orchestrator | changed: [testbed-node-3]
2025-05-25 03:58:59.425464 | orchestrator | changed: [testbed-node-4]
2025-05-25 03:58:59.425469 | orchestrator | changed: [testbed-node-5]
2025-05-25 03:58:59.425476 | orchestrator |
2025-05-25 03:58:59.425481 | orchestrator | RUNNING HANDLER [ceph-handler : Set _crash_handler_called after restart] *******
2025-05-25 03:58:59.425492 | orchestrator | Sunday 25 May 2025 03:57:00 +0000 (0:00:02.290) 0:09:03.876 ************
2025-05-25 03:58:59.425497 | orchestrator | ok: [testbed-node-0]
2025-05-25 03:58:59.425501 | orchestrator | ok: [testbed-node-1]
2025-05-25 03:58:59.425506 | orchestrator | ok:
[testbed-node-2] 2025-05-25 03:58:59.425510 | orchestrator | ok: [testbed-node-3] 2025-05-25 03:58:59.425514 | orchestrator | ok: [testbed-node-4] 2025-05-25 03:58:59.425519 | orchestrator | ok: [testbed-node-5] 2025-05-25 03:58:59.425523 | orchestrator | 2025-05-25 03:58:59.425528 | orchestrator | PLAY [Apply role ceph-mds] ***************************************************** 2025-05-25 03:58:59.425532 | orchestrator | 2025-05-25 03:58:59.425537 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************ 2025-05-25 03:58:59.425541 | orchestrator | Sunday 25 May 2025 03:57:01 +0000 (0:00:01.206) 0:09:05.082 ************ 2025-05-25 03:58:59.425546 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-05-25 03:58:59.425550 | orchestrator | 2025-05-25 03:58:59.425555 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] ********************* 2025-05-25 03:58:59.425559 | orchestrator | Sunday 25 May 2025 03:57:02 +0000 (0:00:00.502) 0:09:05.584 ************ 2025-05-25 03:58:59.425564 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-05-25 03:58:59.425568 | orchestrator | 2025-05-25 03:58:59.425572 | orchestrator | TASK [ceph-handler : Check for a mon container] ******************************** 2025-05-25 03:58:59.425577 | orchestrator | Sunday 25 May 2025 03:57:02 +0000 (0:00:00.758) 0:09:06.343 ************ 2025-05-25 03:58:59.425581 | orchestrator | skipping: [testbed-node-3] 2025-05-25 03:58:59.425586 | orchestrator | skipping: [testbed-node-4] 2025-05-25 03:58:59.425590 | orchestrator | skipping: [testbed-node-5] 2025-05-25 03:58:59.425595 | orchestrator | 2025-05-25 03:58:59.425599 | orchestrator | TASK [ceph-handler : Check for an osd container] ******************************* 2025-05-25 03:58:59.425604 | orchestrator | 
Sunday 25 May 2025 03:57:03 +0000 (0:00:00.298) 0:09:06.642 ************ 2025-05-25 03:58:59.425608 | orchestrator | ok: [testbed-node-3] 2025-05-25 03:58:59.425613 | orchestrator | ok: [testbed-node-4] 2025-05-25 03:58:59.425617 | orchestrator | ok: [testbed-node-5] 2025-05-25 03:58:59.425621 | orchestrator | 2025-05-25 03:58:59.425626 | orchestrator | TASK [ceph-handler : Check for a mds container] ******************************** 2025-05-25 03:58:59.425630 | orchestrator | Sunday 25 May 2025 03:57:03 +0000 (0:00:00.676) 0:09:07.319 ************ 2025-05-25 03:58:59.425635 | orchestrator | ok: [testbed-node-3] 2025-05-25 03:58:59.425639 | orchestrator | ok: [testbed-node-4] 2025-05-25 03:58:59.425643 | orchestrator | ok: [testbed-node-5] 2025-05-25 03:58:59.425648 | orchestrator | 2025-05-25 03:58:59.425653 | orchestrator | TASK [ceph-handler : Check for a rgw container] ******************************** 2025-05-25 03:58:59.425657 | orchestrator | Sunday 25 May 2025 03:57:04 +0000 (0:00:01.080) 0:09:08.399 ************ 2025-05-25 03:58:59.425662 | orchestrator | ok: [testbed-node-3] 2025-05-25 03:58:59.425666 | orchestrator | ok: [testbed-node-4] 2025-05-25 03:58:59.425670 | orchestrator | ok: [testbed-node-5] 2025-05-25 03:58:59.425675 | orchestrator | 2025-05-25 03:58:59.425679 | orchestrator | TASK [ceph-handler : Check for a mgr container] ******************************** 2025-05-25 03:58:59.425684 | orchestrator | Sunday 25 May 2025 03:57:05 +0000 (0:00:00.813) 0:09:09.213 ************ 2025-05-25 03:58:59.425688 | orchestrator | skipping: [testbed-node-3] 2025-05-25 03:58:59.425693 | orchestrator | skipping: [testbed-node-4] 2025-05-25 03:58:59.425697 | orchestrator | skipping: [testbed-node-5] 2025-05-25 03:58:59.425702 | orchestrator | 2025-05-25 03:58:59.425706 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] ************************* 2025-05-25 03:58:59.425710 | orchestrator | Sunday 25 May 2025 03:57:06 +0000 (0:00:00.324) 
0:09:09.537 ************ 2025-05-25 03:58:59.425715 | orchestrator | skipping: [testbed-node-3] 2025-05-25 03:58:59.425723 | orchestrator | skipping: [testbed-node-4] 2025-05-25 03:58:59.425727 | orchestrator | skipping: [testbed-node-5] 2025-05-25 03:58:59.425732 | orchestrator | 2025-05-25 03:58:59.425736 | orchestrator | TASK [ceph-handler : Check for a nfs container] ******************************** 2025-05-25 03:58:59.425741 | orchestrator | Sunday 25 May 2025 03:57:06 +0000 (0:00:00.293) 0:09:09.830 ************ 2025-05-25 03:58:59.425745 | orchestrator | skipping: [testbed-node-3] 2025-05-25 03:58:59.425750 | orchestrator | skipping: [testbed-node-4] 2025-05-25 03:58:59.425754 | orchestrator | skipping: [testbed-node-5] 2025-05-25 03:58:59.425758 | orchestrator | 2025-05-25 03:58:59.425763 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] ************************* 2025-05-25 03:58:59.425767 | orchestrator | Sunday 25 May 2025 03:57:06 +0000 (0:00:00.572) 0:09:10.402 ************ 2025-05-25 03:58:59.425772 | orchestrator | ok: [testbed-node-3] 2025-05-25 03:58:59.425776 | orchestrator | ok: [testbed-node-4] 2025-05-25 03:58:59.425781 | orchestrator | ok: [testbed-node-5] 2025-05-25 03:58:59.425785 | orchestrator | 2025-05-25 03:58:59.425790 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] ********************** 2025-05-25 03:58:59.425794 | orchestrator | Sunday 25 May 2025 03:57:07 +0000 (0:00:00.828) 0:09:11.231 ************ 2025-05-25 03:58:59.425799 | orchestrator | ok: [testbed-node-3] 2025-05-25 03:58:59.425803 | orchestrator | ok: [testbed-node-4] 2025-05-25 03:58:59.425807 | orchestrator | ok: [testbed-node-5] 2025-05-25 03:58:59.425812 | orchestrator | 2025-05-25 03:58:59.425819 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] ******************* 2025-05-25 03:58:59.425824 | orchestrator | Sunday 25 May 2025 03:57:08 +0000 (0:00:00.752) 0:09:11.983 ************ 2025-05-25 
03:58:59.425828 | orchestrator | skipping: [testbed-node-3] 2025-05-25 03:58:59.425833 | orchestrator | skipping: [testbed-node-4] 2025-05-25 03:58:59.425837 | orchestrator | skipping: [testbed-node-5] 2025-05-25 03:58:59.425841 | orchestrator | 2025-05-25 03:58:59.425846 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ****************************** 2025-05-25 03:58:59.425850 | orchestrator | Sunday 25 May 2025 03:57:08 +0000 (0:00:00.299) 0:09:12.283 ************ 2025-05-25 03:58:59.425855 | orchestrator | skipping: [testbed-node-3] 2025-05-25 03:58:59.425859 | orchestrator | skipping: [testbed-node-4] 2025-05-25 03:58:59.425864 | orchestrator | skipping: [testbed-node-5] 2025-05-25 03:58:59.425868 | orchestrator | 2025-05-25 03:58:59.425872 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ****************************** 2025-05-25 03:58:59.425877 | orchestrator | Sunday 25 May 2025 03:57:09 +0000 (0:00:00.560) 0:09:12.844 ************ 2025-05-25 03:58:59.425884 | orchestrator | ok: [testbed-node-3] 2025-05-25 03:58:59.425888 | orchestrator | ok: [testbed-node-4] 2025-05-25 03:58:59.425893 | orchestrator | ok: [testbed-node-5] 2025-05-25 03:58:59.425897 | orchestrator | 2025-05-25 03:58:59.425929 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ****************************** 2025-05-25 03:58:59.425935 | orchestrator | Sunday 25 May 2025 03:57:09 +0000 (0:00:00.356) 0:09:13.200 ************ 2025-05-25 03:58:59.425940 | orchestrator | ok: [testbed-node-3] 2025-05-25 03:58:59.425945 | orchestrator | ok: [testbed-node-4] 2025-05-25 03:58:59.425949 | orchestrator | ok: [testbed-node-5] 2025-05-25 03:58:59.425953 | orchestrator | 2025-05-25 03:58:59.425958 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ****************************** 2025-05-25 03:58:59.425963 | orchestrator | Sunday 25 May 2025 03:57:10 +0000 (0:00:00.357) 0:09:13.558 ************ 2025-05-25 03:58:59.425967 | orchestrator | ok: 
[testbed-node-3] 2025-05-25 03:58:59.425972 | orchestrator | ok: [testbed-node-4] 2025-05-25 03:58:59.425976 | orchestrator | ok: [testbed-node-5] 2025-05-25 03:58:59.425980 | orchestrator | 2025-05-25 03:58:59.425985 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ****************************** 2025-05-25 03:58:59.425989 | orchestrator | Sunday 25 May 2025 03:57:10 +0000 (0:00:00.314) 0:09:13.872 ************ 2025-05-25 03:58:59.425994 | orchestrator | skipping: [testbed-node-3] 2025-05-25 03:58:59.425998 | orchestrator | skipping: [testbed-node-4] 2025-05-25 03:58:59.426003 | orchestrator | skipping: [testbed-node-5] 2025-05-25 03:58:59.426011 | orchestrator | 2025-05-25 03:58:59.426034 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ****************************** 2025-05-25 03:58:59.426039 | orchestrator | Sunday 25 May 2025 03:57:10 +0000 (0:00:00.480) 0:09:14.353 ************ 2025-05-25 03:58:59.426043 | orchestrator | skipping: [testbed-node-3] 2025-05-25 03:58:59.426047 | orchestrator | skipping: [testbed-node-4] 2025-05-25 03:58:59.426051 | orchestrator | skipping: [testbed-node-5] 2025-05-25 03:58:59.426055 | orchestrator | 2025-05-25 03:58:59.426059 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ****************************** 2025-05-25 03:58:59.426063 | orchestrator | Sunday 25 May 2025 03:57:11 +0000 (0:00:00.277) 0:09:14.631 ************ 2025-05-25 03:58:59.426067 | orchestrator | skipping: [testbed-node-3] 2025-05-25 03:58:59.426071 | orchestrator | skipping: [testbed-node-4] 2025-05-25 03:58:59.426075 | orchestrator | skipping: [testbed-node-5] 2025-05-25 03:58:59.426079 | orchestrator | 2025-05-25 03:58:59.426083 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] **************************** 2025-05-25 03:58:59.426088 | orchestrator | Sunday 25 May 2025 03:57:11 +0000 (0:00:00.244) 0:09:14.875 ************ 2025-05-25 03:58:59.426092 | orchestrator | ok: [testbed-node-3] 
2025-05-25 03:58:59.426096 | orchestrator | ok: [testbed-node-4] 2025-05-25 03:58:59.426100 | orchestrator | ok: [testbed-node-5] 2025-05-25 03:58:59.426104 | orchestrator | 2025-05-25 03:58:59.426108 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] ************************* 2025-05-25 03:58:59.426112 | orchestrator | Sunday 25 May 2025 03:57:11 +0000 (0:00:00.353) 0:09:15.229 ************ 2025-05-25 03:58:59.426116 | orchestrator | ok: [testbed-node-3] 2025-05-25 03:58:59.426120 | orchestrator | ok: [testbed-node-4] 2025-05-25 03:58:59.426124 | orchestrator | ok: [testbed-node-5] 2025-05-25 03:58:59.426128 | orchestrator | 2025-05-25 03:58:59.426132 | orchestrator | TASK [ceph-mds : Include create_mds_filesystems.yml] *************************** 2025-05-25 03:58:59.426137 | orchestrator | Sunday 25 May 2025 03:57:12 +0000 (0:00:00.660) 0:09:15.889 ************ 2025-05-25 03:58:59.426141 | orchestrator | skipping: [testbed-node-4] 2025-05-25 03:58:59.426145 | orchestrator | skipping: [testbed-node-5] 2025-05-25 03:58:59.426149 | orchestrator | included: /ansible/roles/ceph-mds/tasks/create_mds_filesystems.yml for testbed-node-3 2025-05-25 03:58:59.426153 | orchestrator | 2025-05-25 03:58:59.426157 | orchestrator | TASK [ceph-facts : Get current default crush rule details] ********************* 2025-05-25 03:58:59.426161 | orchestrator | Sunday 25 May 2025 03:57:12 +0000 (0:00:00.382) 0:09:16.272 ************ 2025-05-25 03:58:59.426165 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] 2025-05-25 03:58:59.426169 | orchestrator | 2025-05-25 03:58:59.426173 | orchestrator | TASK [ceph-facts : Get current default crush rule name] ************************ 2025-05-25 03:58:59.426177 | orchestrator | Sunday 25 May 2025 03:57:14 +0000 (0:00:01.980) 0:09:18.252 ************ 2025-05-25 03:58:59.426182 | orchestrator | skipping: [testbed-node-3] => (item={'rule_id': 0, 'rule_name': 'replicated_rule', 'type': 1, 'steps': [{'op': 
'take', 'item': -1, 'item_name': 'default'}, {'op': 'chooseleaf_firstn', 'num': 0, 'type': 'host'}, {'op': 'emit'}]})  2025-05-25 03:58:59.426188 | orchestrator | skipping: [testbed-node-3] 2025-05-25 03:58:59.426192 | orchestrator | 2025-05-25 03:58:59.426196 | orchestrator | TASK [ceph-mds : Create filesystem pools] ************************************** 2025-05-25 03:58:59.426200 | orchestrator | Sunday 25 May 2025 03:57:14 +0000 (0:00:00.197) 0:09:18.450 ************ 2025-05-25 03:58:59.426205 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'application': 'cephfs', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'cephfs_data', 'pg_num': 16, 'pgp_num': 16, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1}) 2025-05-25 03:58:59.426214 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'application': 'cephfs', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'cephfs_metadata', 'pg_num': 16, 'pgp_num': 16, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1}) 2025-05-25 03:58:59.426223 | orchestrator | 2025-05-25 03:58:59.426227 | orchestrator | TASK [ceph-mds : Create ceph filesystem] *************************************** 2025-05-25 03:58:59.426231 | orchestrator | Sunday 25 May 2025 03:57:22 +0000 (0:00:07.672) 0:09:26.122 ************ 2025-05-25 03:58:59.426235 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] 2025-05-25 03:58:59.426239 | orchestrator | 2025-05-25 03:58:59.426243 | orchestrator | TASK [ceph-mds : Include common.yml] ******************************************* 2025-05-25 03:58:59.426247 | orchestrator | Sunday 25 May 2025 03:57:26 +0000 (0:00:03.607) 0:09:29.730 ************ 2025-05-25 03:58:59.426254 | orchestrator | included: /ansible/roles/ceph-mds/tasks/common.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-05-25 03:58:59.426259 | orchestrator | 2025-05-25 03:58:59.426263 | 
orchestrator | TASK [ceph-mds : Create bootstrap-mds and mds directories] ********************* 2025-05-25 03:58:59.426267 | orchestrator | Sunday 25 May 2025 03:57:26 +0000 (0:00:00.567) 0:09:30.297 ************ 2025-05-25 03:58:59.426271 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-mds/) 2025-05-25 03:58:59.426275 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-mds/) 2025-05-25 03:58:59.426279 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-mds/) 2025-05-25 03:58:59.426283 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/mds/ceph-testbed-node-3) 2025-05-25 03:58:59.426287 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/mds/ceph-testbed-node-4) 2025-05-25 03:58:59.426291 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/mds/ceph-testbed-node-5) 2025-05-25 03:58:59.426295 | orchestrator | 2025-05-25 03:58:59.426299 | orchestrator | TASK [ceph-mds : Get keys from monitors] *************************************** 2025-05-25 03:58:59.426303 | orchestrator | Sunday 25 May 2025 03:57:27 +0000 (0:00:01.163) 0:09:31.461 ************ 2025-05-25 03:58:59.426307 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-05-25 03:58:59.426311 | orchestrator | skipping: [testbed-node-3] => (item=None)  2025-05-25 03:58:59.426316 | orchestrator | ok: [testbed-node-3 -> {{ groups.get(mon_group_name)[0] }}] 2025-05-25 03:58:59.426320 | orchestrator | 2025-05-25 03:58:59.426324 | orchestrator | TASK [ceph-mds : Copy ceph key(s) if needed] *********************************** 2025-05-25 03:58:59.426328 | orchestrator | Sunday 25 May 2025 03:57:30 +0000 (0:00:02.270) 0:09:33.731 ************ 2025-05-25 03:58:59.426332 | orchestrator | changed: [testbed-node-3] => (item=None) 2025-05-25 03:58:59.426336 | orchestrator | skipping: [testbed-node-3] => (item=None)  2025-05-25 03:58:59.426340 | orchestrator | changed: [testbed-node-3] 
2025-05-25 03:58:59.426344 | orchestrator | changed: [testbed-node-4] => (item=None) 2025-05-25 03:58:59.426348 | orchestrator | skipping: [testbed-node-4] => (item=None)  2025-05-25 03:58:59.426353 | orchestrator | changed: [testbed-node-4] 2025-05-25 03:58:59.426357 | orchestrator | changed: [testbed-node-5] => (item=None) 2025-05-25 03:58:59.426361 | orchestrator | skipping: [testbed-node-5] => (item=None)  2025-05-25 03:58:59.426365 | orchestrator | changed: [testbed-node-5] 2025-05-25 03:58:59.426369 | orchestrator | 2025-05-25 03:58:59.426373 | orchestrator | TASK [ceph-mds : Create mds keyring] ******************************************* 2025-05-25 03:58:59.426377 | orchestrator | Sunday 25 May 2025 03:57:31 +0000 (0:00:01.456) 0:09:35.187 ************ 2025-05-25 03:58:59.426381 | orchestrator | changed: [testbed-node-3] 2025-05-25 03:58:59.426385 | orchestrator | changed: [testbed-node-4] 2025-05-25 03:58:59.426389 | orchestrator | changed: [testbed-node-5] 2025-05-25 03:58:59.426393 | orchestrator | 2025-05-25 03:58:59.426397 | orchestrator | TASK [ceph-mds : Non_containerized.yml] **************************************** 2025-05-25 03:58:59.426401 | orchestrator | Sunday 25 May 2025 03:57:34 +0000 (0:00:02.643) 0:09:37.831 ************ 2025-05-25 03:58:59.426405 | orchestrator | skipping: [testbed-node-3] 2025-05-25 03:58:59.426409 | orchestrator | skipping: [testbed-node-4] 2025-05-25 03:58:59.426413 | orchestrator | skipping: [testbed-node-5] 2025-05-25 03:58:59.426417 | orchestrator | 2025-05-25 03:58:59.426421 | orchestrator | TASK [ceph-mds : Containerized.yml] ******************************************** 2025-05-25 03:58:59.426429 | orchestrator | Sunday 25 May 2025 03:57:34 +0000 (0:00:00.323) 0:09:38.155 ************ 2025-05-25 03:58:59.426433 | orchestrator | included: /ansible/roles/ceph-mds/tasks/containerized.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-05-25 03:58:59.426437 | orchestrator | 2025-05-25 03:58:59.426441 | 
orchestrator | TASK [ceph-mds : Include_tasks systemd.yml] ************************************ 2025-05-25 03:58:59.426445 | orchestrator | Sunday 25 May 2025 03:57:35 +0000 (0:00:00.743) 0:09:38.898 ************ 2025-05-25 03:58:59.426449 | orchestrator | included: /ansible/roles/ceph-mds/tasks/systemd.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-05-25 03:58:59.426453 | orchestrator | 2025-05-25 03:58:59.426458 | orchestrator | TASK [ceph-mds : Generate systemd unit file] *********************************** 2025-05-25 03:58:59.426462 | orchestrator | Sunday 25 May 2025 03:57:35 +0000 (0:00:00.529) 0:09:39.427 ************ 2025-05-25 03:58:59.426466 | orchestrator | changed: [testbed-node-3] 2025-05-25 03:58:59.426470 | orchestrator | changed: [testbed-node-4] 2025-05-25 03:58:59.426474 | orchestrator | changed: [testbed-node-5] 2025-05-25 03:58:59.426478 | orchestrator | 2025-05-25 03:58:59.426482 | orchestrator | TASK [ceph-mds : Generate systemd ceph-mds target file] ************************ 2025-05-25 03:58:59.426486 | orchestrator | Sunday 25 May 2025 03:57:37 +0000 (0:00:01.185) 0:09:40.612 ************ 2025-05-25 03:58:59.426490 | orchestrator | changed: [testbed-node-3] 2025-05-25 03:58:59.426494 | orchestrator | changed: [testbed-node-4] 2025-05-25 03:58:59.426498 | orchestrator | changed: [testbed-node-5] 2025-05-25 03:58:59.426502 | orchestrator | 2025-05-25 03:58:59.426506 | orchestrator | TASK [ceph-mds : Enable ceph-mds.target] *************************************** 2025-05-25 03:58:59.426513 | orchestrator | Sunday 25 May 2025 03:57:38 +0000 (0:00:01.442) 0:09:42.054 ************ 2025-05-25 03:58:59.426517 | orchestrator | changed: [testbed-node-3] 2025-05-25 03:58:59.426521 | orchestrator | changed: [testbed-node-4] 2025-05-25 03:58:59.426525 | orchestrator | changed: [testbed-node-5] 2025-05-25 03:58:59.426529 | orchestrator | 2025-05-25 03:58:59.426534 | orchestrator | TASK [ceph-mds : Systemd start mds container] 
********************************** 2025-05-25 03:58:59.426538 | orchestrator | Sunday 25 May 2025 03:57:40 +0000 (0:00:01.664) 0:09:43.719 ************ 2025-05-25 03:58:59.426542 | orchestrator | changed: [testbed-node-3] 2025-05-25 03:58:59.426546 | orchestrator | changed: [testbed-node-4] 2025-05-25 03:58:59.426550 | orchestrator | changed: [testbed-node-5] 2025-05-25 03:58:59.426554 | orchestrator | 2025-05-25 03:58:59.426558 | orchestrator | TASK [ceph-mds : Wait for mds socket to exist] ********************************* 2025-05-25 03:58:59.426562 | orchestrator | Sunday 25 May 2025 03:57:42 +0000 (0:00:01.926) 0:09:45.645 ************ 2025-05-25 03:58:59.426566 | orchestrator | ok: [testbed-node-3] 2025-05-25 03:58:59.426573 | orchestrator | ok: [testbed-node-4] 2025-05-25 03:58:59.426577 | orchestrator | ok: [testbed-node-5] 2025-05-25 03:58:59.426581 | orchestrator | 2025-05-25 03:58:59.426585 | orchestrator | RUNNING HANDLER [ceph-handler : Make tempdir for scripts] ********************** 2025-05-25 03:58:59.426589 | orchestrator | Sunday 25 May 2025 03:57:43 +0000 (0:00:01.506) 0:09:47.152 ************ 2025-05-25 03:58:59.426593 | orchestrator | changed: [testbed-node-3] 2025-05-25 03:58:59.426597 | orchestrator | changed: [testbed-node-4] 2025-05-25 03:58:59.426601 | orchestrator | changed: [testbed-node-5] 2025-05-25 03:58:59.426605 | orchestrator | 2025-05-25 03:58:59.426609 | orchestrator | RUNNING HANDLER [ceph-handler : Mdss handler] ********************************** 2025-05-25 03:58:59.426614 | orchestrator | Sunday 25 May 2025 03:57:44 +0000 (0:00:00.686) 0:09:47.839 ************ 2025-05-25 03:58:59.426618 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mdss.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-05-25 03:58:59.426622 | orchestrator | 2025-05-25 03:58:59.426626 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mds_handler_called before restart] ******** 2025-05-25 03:58:59.426630 | orchestrator | 
Sunday 25 May 2025 03:57:44 +0000 (0:00:00.684) 0:09:48.524 ************ 2025-05-25 03:58:59.426634 | orchestrator | ok: [testbed-node-3] 2025-05-25 03:58:59.426641 | orchestrator | ok: [testbed-node-4] 2025-05-25 03:58:59.426645 | orchestrator | ok: [testbed-node-5] 2025-05-25 03:58:59.426649 | orchestrator | 2025-05-25 03:58:59.426653 | orchestrator | RUNNING HANDLER [ceph-handler : Copy mds restart script] *********************** 2025-05-25 03:58:59.426657 | orchestrator | Sunday 25 May 2025 03:57:45 +0000 (0:00:00.346) 0:09:48.870 ************ 2025-05-25 03:58:59.426661 | orchestrator | changed: [testbed-node-3] 2025-05-25 03:58:59.426666 | orchestrator | changed: [testbed-node-4] 2025-05-25 03:58:59.426670 | orchestrator | changed: [testbed-node-5] 2025-05-25 03:58:59.426674 | orchestrator | 2025-05-25 03:58:59.426678 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph mds daemon(s)] ******************** 2025-05-25 03:58:59.426682 | orchestrator | Sunday 25 May 2025 03:57:46 +0000 (0:00:01.154) 0:09:50.024 ************ 2025-05-25 03:58:59.426686 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-05-25 03:58:59.426690 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-05-25 03:58:59.426694 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-05-25 03:58:59.426698 | orchestrator | skipping: [testbed-node-3] 2025-05-25 03:58:59.426702 | orchestrator | 2025-05-25 03:58:59.426706 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mds_handler_called after restart] ********* 2025-05-25 03:58:59.426711 | orchestrator | Sunday 25 May 2025 03:57:47 +0000 (0:00:00.866) 0:09:50.890 ************ 2025-05-25 03:58:59.426715 | orchestrator | ok: [testbed-node-3] 2025-05-25 03:58:59.426719 | orchestrator | ok: [testbed-node-4] 2025-05-25 03:58:59.426723 | orchestrator | ok: [testbed-node-5] 2025-05-25 03:58:59.426727 | orchestrator | 2025-05-25 03:58:59.426731 | orchestrator | PLAY [Apply role 
ceph-rgw] ***************************************************** 2025-05-25 03:58:59.426735 | orchestrator | 2025-05-25 03:58:59.426739 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************ 2025-05-25 03:58:59.426743 | orchestrator | Sunday 25 May 2025 03:57:48 +0000 (0:00:00.772) 0:09:51.663 ************ 2025-05-25 03:58:59.426747 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-05-25 03:58:59.426751 | orchestrator | 2025-05-25 03:58:59.426755 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] ********************* 2025-05-25 03:58:59.426759 | orchestrator | Sunday 25 May 2025 03:57:48 +0000 (0:00:00.480) 0:09:52.143 ************ 2025-05-25 03:58:59.426763 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-05-25 03:58:59.426768 | orchestrator | 2025-05-25 03:58:59.426772 | orchestrator | TASK [ceph-handler : Check for a mon container] ******************************** 2025-05-25 03:58:59.426776 | orchestrator | Sunday 25 May 2025 03:57:49 +0000 (0:00:00.720) 0:09:52.863 ************ 2025-05-25 03:58:59.426780 | orchestrator | skipping: [testbed-node-3] 2025-05-25 03:58:59.426784 | orchestrator | skipping: [testbed-node-4] 2025-05-25 03:58:59.426788 | orchestrator | skipping: [testbed-node-5] 2025-05-25 03:58:59.426792 | orchestrator | 2025-05-25 03:58:59.426796 | orchestrator | TASK [ceph-handler : Check for an osd container] ******************************* 2025-05-25 03:58:59.426800 | orchestrator | Sunday 25 May 2025 03:57:49 +0000 (0:00:00.289) 0:09:53.152 ************ 2025-05-25 03:58:59.426804 | orchestrator | ok: [testbed-node-3] 2025-05-25 03:58:59.426808 | orchestrator | ok: [testbed-node-4] 2025-05-25 03:58:59.426812 | orchestrator | ok: [testbed-node-5] 2025-05-25 03:58:59.426816 | orchestrator | 
2025-05-25 03:58:59.426820 | orchestrator | TASK [ceph-handler : Check for a mds container] ******************************** 2025-05-25 03:58:59.426824 | orchestrator | Sunday 25 May 2025 03:57:50 +0000 (0:00:00.730) 0:09:53.883 ************ 2025-05-25 03:58:59.426828 | orchestrator | ok: [testbed-node-3] 2025-05-25 03:58:59.426833 | orchestrator | ok: [testbed-node-4] 2025-05-25 03:58:59.426837 | orchestrator | ok: [testbed-node-5] 2025-05-25 03:58:59.426841 | orchestrator | 2025-05-25 03:58:59.426845 | orchestrator | TASK [ceph-handler : Check for a rgw container] ******************************** 2025-05-25 03:58:59.426849 | orchestrator | Sunday 25 May 2025 03:57:51 +0000 (0:00:00.704) 0:09:54.587 ************ 2025-05-25 03:58:59.426859 | orchestrator | ok: [testbed-node-3] 2025-05-25 03:58:59.426863 | orchestrator | ok: [testbed-node-4] 2025-05-25 03:58:59.426867 | orchestrator | ok: [testbed-node-5] 2025-05-25 03:58:59.426871 | orchestrator | 2025-05-25 03:58:59.426875 | orchestrator | TASK [ceph-handler : Check for a mgr container] ******************************** 2025-05-25 03:58:59.426880 | orchestrator | Sunday 25 May 2025 03:57:52 +0000 (0:00:00.990) 0:09:55.578 ************ 2025-05-25 03:58:59.426884 | orchestrator | skipping: [testbed-node-3] 2025-05-25 03:58:59.426888 | orchestrator | skipping: [testbed-node-4] 2025-05-25 03:58:59.426892 | orchestrator | skipping: [testbed-node-5] 2025-05-25 03:58:59.426896 | orchestrator | 2025-05-25 03:58:59.426900 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] ************************* 2025-05-25 03:58:59.426919 | orchestrator | Sunday 25 May 2025 03:57:52 +0000 (0:00:00.308) 0:09:55.886 ************ 2025-05-25 03:58:59.426923 | orchestrator | skipping: [testbed-node-3] 2025-05-25 03:58:59.426928 | orchestrator | skipping: [testbed-node-4] 2025-05-25 03:58:59.426932 | orchestrator | skipping: [testbed-node-5] 2025-05-25 03:58:59.426936 | orchestrator | 2025-05-25 03:58:59.426942 | 
orchestrator | TASK [ceph-handler : Check for a nfs container] ******************************** 2025-05-25 03:58:59.426946 | orchestrator | Sunday 25 May 2025 03:57:52 +0000 (0:00:00.285) 0:09:56.171 ************ 2025-05-25 03:58:59.426950 | orchestrator | skipping: [testbed-node-3] 2025-05-25 03:58:59.426955 | orchestrator | skipping: [testbed-node-4] 2025-05-25 03:58:59.426959 | orchestrator | skipping: [testbed-node-5] 2025-05-25 03:58:59.426963 | orchestrator | 2025-05-25 03:58:59.426967 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] ************************* 2025-05-25 03:58:59.426971 | orchestrator | Sunday 25 May 2025 03:57:52 +0000 (0:00:00.287) 0:09:56.459 ************ 2025-05-25 03:58:59.426975 | orchestrator | ok: [testbed-node-3] 2025-05-25 03:58:59.426979 | orchestrator | ok: [testbed-node-4] 2025-05-25 03:58:59.426983 | orchestrator | ok: [testbed-node-5] 2025-05-25 03:58:59.426987 | orchestrator | 2025-05-25 03:58:59.426991 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] ********************** 2025-05-25 03:58:59.426995 | orchestrator | Sunday 25 May 2025 03:57:53 +0000 (0:00:00.991) 0:09:57.450 ************ 2025-05-25 03:58:59.426999 | orchestrator | ok: [testbed-node-3] 2025-05-25 03:58:59.427003 | orchestrator | ok: [testbed-node-4] 2025-05-25 03:58:59.427007 | orchestrator | ok: [testbed-node-5] 2025-05-25 03:58:59.427011 | orchestrator | 2025-05-25 03:58:59.427015 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] ******************* 2025-05-25 03:58:59.427019 | orchestrator | Sunday 25 May 2025 03:57:54 +0000 (0:00:00.710) 0:09:58.161 ************ 2025-05-25 03:58:59.427024 | orchestrator | skipping: [testbed-node-3] 2025-05-25 03:58:59.427028 | orchestrator | skipping: [testbed-node-4] 2025-05-25 03:58:59.427032 | orchestrator | skipping: [testbed-node-5] 2025-05-25 03:58:59.427036 | orchestrator | 2025-05-25 03:58:59.427040 | orchestrator | TASK [ceph-handler : 
Set_fact handler_mon_status] ****************************** 2025-05-25 03:58:59.427044 | orchestrator | Sunday 25 May 2025 03:57:54 +0000 (0:00:00.306) 0:09:58.468 ************ 2025-05-25 03:58:59.427048 | orchestrator | skipping: [testbed-node-3] 2025-05-25 03:58:59.427052 | orchestrator | skipping: [testbed-node-4] 2025-05-25 03:58:59.427056 | orchestrator | skipping: [testbed-node-5] 2025-05-25 03:58:59.427060 | orchestrator | 2025-05-25 03:58:59.427064 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ****************************** 2025-05-25 03:58:59.427068 | orchestrator | Sunday 25 May 2025 03:57:55 +0000 (0:00:00.294) 0:09:58.762 ************ 2025-05-25 03:58:59.427072 | orchestrator | ok: [testbed-node-3] 2025-05-25 03:58:59.427076 | orchestrator | ok: [testbed-node-4] 2025-05-25 03:58:59.427080 | orchestrator | ok: [testbed-node-5] 2025-05-25 03:58:59.427084 | orchestrator | 2025-05-25 03:58:59.427088 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ****************************** 2025-05-25 03:58:59.427092 | orchestrator | Sunday 25 May 2025 03:57:55 +0000 (0:00:00.606) 0:09:59.369 ************ 2025-05-25 03:58:59.427096 | orchestrator | ok: [testbed-node-3] 2025-05-25 03:58:59.427103 | orchestrator | ok: [testbed-node-4] 2025-05-25 03:58:59.427108 | orchestrator | ok: [testbed-node-5] 2025-05-25 03:58:59.427112 | orchestrator | 2025-05-25 03:58:59.427116 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ****************************** 2025-05-25 03:58:59.427120 | orchestrator | Sunday 25 May 2025 03:57:56 +0000 (0:00:00.330) 0:09:59.699 ************ 2025-05-25 03:58:59.427124 | orchestrator | ok: [testbed-node-3] 2025-05-25 03:58:59.427128 | orchestrator | ok: [testbed-node-4] 2025-05-25 03:58:59.427132 | orchestrator | ok: [testbed-node-5] 2025-05-25 03:58:59.427136 | orchestrator | 2025-05-25 03:58:59.427140 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] 
****************************** 2025-05-25 03:58:59.427144 | orchestrator | Sunday 25 May 2025 03:57:56 +0000 (0:00:00.325) 0:10:00.025 ************ 2025-05-25 03:58:59.427148 | orchestrator | skipping: [testbed-node-3] 2025-05-25 03:58:59.427152 | orchestrator | skipping: [testbed-node-4] 2025-05-25 03:58:59.427156 | orchestrator | skipping: [testbed-node-5] 2025-05-25 03:58:59.427160 | orchestrator | 2025-05-25 03:58:59.427164 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ****************************** 2025-05-25 03:58:59.427168 | orchestrator | Sunday 25 May 2025 03:57:56 +0000 (0:00:00.272) 0:10:00.298 ************ 2025-05-25 03:58:59.427172 | orchestrator | skipping: [testbed-node-3] 2025-05-25 03:58:59.427176 | orchestrator | skipping: [testbed-node-4] 2025-05-25 03:58:59.427180 | orchestrator | skipping: [testbed-node-5] 2025-05-25 03:58:59.427184 | orchestrator | 2025-05-25 03:58:59.427188 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ****************************** 2025-05-25 03:58:59.427192 | orchestrator | Sunday 25 May 2025 03:57:57 +0000 (0:00:00.551) 0:10:00.849 ************ 2025-05-25 03:58:59.427196 | orchestrator | skipping: [testbed-node-3] 2025-05-25 03:58:59.427200 | orchestrator | skipping: [testbed-node-4] 2025-05-25 03:58:59.427204 | orchestrator | skipping: [testbed-node-5] 2025-05-25 03:58:59.427208 | orchestrator | 2025-05-25 03:58:59.427212 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] **************************** 2025-05-25 03:58:59.427216 | orchestrator | Sunday 25 May 2025 03:57:57 +0000 (0:00:00.285) 0:10:01.135 ************ 2025-05-25 03:58:59.427220 | orchestrator | ok: [testbed-node-3] 2025-05-25 03:58:59.427224 | orchestrator | ok: [testbed-node-4] 2025-05-25 03:58:59.427228 | orchestrator | ok: [testbed-node-5] 2025-05-25 03:58:59.427232 | orchestrator | 2025-05-25 03:58:59.427236 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] 
************************* 2025-05-25 03:58:59.427240 | orchestrator | Sunday 25 May 2025 03:57:57 +0000 (0:00:00.355) 0:10:01.490 ************ 2025-05-25 03:58:59.427244 | orchestrator | ok: [testbed-node-3] 2025-05-25 03:58:59.427248 | orchestrator | ok: [testbed-node-4] 2025-05-25 03:58:59.427255 | orchestrator | ok: [testbed-node-5] 2025-05-25 03:58:59.427259 | orchestrator | 2025-05-25 03:58:59.427263 | orchestrator | TASK [ceph-rgw : Include common.yml] ******************************************* 2025-05-25 03:58:59.427267 | orchestrator | Sunday 25 May 2025 03:57:58 +0000 (0:00:00.748) 0:10:02.239 ************ 2025-05-25 03:58:59.427271 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/common.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-05-25 03:58:59.427275 | orchestrator | 2025-05-25 03:58:59.427279 | orchestrator | TASK [ceph-rgw : Get keys from monitors] *************************************** 2025-05-25 03:58:59.427283 | orchestrator | Sunday 25 May 2025 03:57:59 +0000 (0:00:00.497) 0:10:02.737 ************ 2025-05-25 03:58:59.427287 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-05-25 03:58:59.427291 | orchestrator | skipping: [testbed-node-3] => (item=None)  2025-05-25 03:58:59.427295 | orchestrator | ok: [testbed-node-3 -> {{ groups.get(mon_group_name)[0] }}] 2025-05-25 03:58:59.427299 | orchestrator | 2025-05-25 03:58:59.427306 | orchestrator | TASK [ceph-rgw : Copy ceph key(s) if needed] *********************************** 2025-05-25 03:58:59.427310 | orchestrator | Sunday 25 May 2025 03:58:01 +0000 (0:00:02.009) 0:10:04.746 ************ 2025-05-25 03:58:59.427314 | orchestrator | changed: [testbed-node-3] => (item=None) 2025-05-25 03:58:59.427318 | orchestrator | skipping: [testbed-node-3] => (item=None)  2025-05-25 03:58:59.427326 | orchestrator | changed: [testbed-node-3] 2025-05-25 03:58:59.427330 | orchestrator | changed: [testbed-node-4] => (item=None) 2025-05-25 03:58:59.427335 
| orchestrator | skipping: [testbed-node-4] => (item=None)  2025-05-25 03:58:59.427339 | orchestrator | changed: [testbed-node-4] 2025-05-25 03:58:59.427343 | orchestrator | changed: [testbed-node-5] => (item=None) 2025-05-25 03:58:59.427347 | orchestrator | skipping: [testbed-node-5] => (item=None)  2025-05-25 03:58:59.427351 | orchestrator | changed: [testbed-node-5] 2025-05-25 03:58:59.427355 | orchestrator | 2025-05-25 03:58:59.427359 | orchestrator | TASK [ceph-rgw : Copy SSL certificate & key data to certificate path] ********** 2025-05-25 03:58:59.427363 | orchestrator | Sunday 25 May 2025 03:58:02 +0000 (0:00:01.405) 0:10:06.151 ************ 2025-05-25 03:58:59.427367 | orchestrator | skipping: [testbed-node-3] 2025-05-25 03:58:59.427371 | orchestrator | skipping: [testbed-node-4] 2025-05-25 03:58:59.427375 | orchestrator | skipping: [testbed-node-5] 2025-05-25 03:58:59.427379 | orchestrator | 2025-05-25 03:58:59.427383 | orchestrator | TASK [ceph-rgw : Include_tasks pre_requisite.yml] ****************************** 2025-05-25 03:58:59.427387 | orchestrator | Sunday 25 May 2025 03:58:02 +0000 (0:00:00.301) 0:10:06.453 ************ 2025-05-25 03:58:59.427391 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/pre_requisite.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-05-25 03:58:59.427395 | orchestrator | 2025-05-25 03:58:59.427399 | orchestrator | TASK [ceph-rgw : Create rados gateway directories] ***************************** 2025-05-25 03:58:59.427403 | orchestrator | Sunday 25 May 2025 03:58:03 +0000 (0:00:00.526) 0:10:06.980 ************ 2025-05-25 03:58:59.427407 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081}) 2025-05-25 03:58:59.427411 | orchestrator | changed: [testbed-node-4 -> testbed-node-0(192.168.16.10)] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 
'radosgw_frontend_port': 8081}) 2025-05-25 03:58:59.427415 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081}) 2025-05-25 03:58:59.427419 | orchestrator | 2025-05-25 03:58:59.427424 | orchestrator | TASK [ceph-rgw : Create rgw keyrings] ****************************************** 2025-05-25 03:58:59.427428 | orchestrator | Sunday 25 May 2025 03:58:04 +0000 (0:00:01.260) 0:10:08.240 ************ 2025-05-25 03:58:59.427432 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-05-25 03:58:59.427436 | orchestrator | changed: [testbed-node-5 -> {{ groups[mon_group_name][0] if groups.get(mon_group_name, []) | length > 0 else 'localhost' }}] 2025-05-25 03:58:59.427440 | orchestrator | changed: [testbed-node-4 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-05-25 03:58:59.427444 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-05-25 03:58:59.427448 | orchestrator | changed: [testbed-node-4 -> {{ groups[mon_group_name][0] if groups.get(mon_group_name, []) | length > 0 else 'localhost' }}] 2025-05-25 03:58:59.427452 | orchestrator | changed: [testbed-node-3 -> {{ groups[mon_group_name][0] if groups.get(mon_group_name, []) | length > 0 else 'localhost' }}] 2025-05-25 03:58:59.427456 | orchestrator | 2025-05-25 03:58:59.427460 | orchestrator | TASK [ceph-rgw : Get keys from monitors] *************************************** 2025-05-25 03:58:59.427464 | orchestrator | Sunday 25 May 2025 03:58:08 +0000 (0:00:04.134) 0:10:12.375 ************ 2025-05-25 03:58:59.427468 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-05-25 03:58:59.427472 | orchestrator | ok: [testbed-node-3 -> {{ groups.get(mon_group_name)[0] }}] 2025-05-25 03:58:59.427476 | orchestrator | ok: [testbed-node-4 -> testbed-node-0(192.168.16.10)] => 
(item=None) 2025-05-25 03:58:59.427480 | orchestrator | ok: [testbed-node-4 -> {{ groups.get(mon_group_name)[0] }}] 2025-05-25 03:58:59.427484 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-05-25 03:58:59.427491 | orchestrator | ok: [testbed-node-5 -> {{ groups.get(mon_group_name)[0] }}] 2025-05-25 03:58:59.427495 | orchestrator | 2025-05-25 03:58:59.427499 | orchestrator | TASK [ceph-rgw : Copy ceph key(s) if needed] *********************************** 2025-05-25 03:58:59.427506 | orchestrator | Sunday 25 May 2025 03:58:10 +0000 (0:00:02.099) 0:10:14.474 ************ 2025-05-25 03:58:59.427510 | orchestrator | changed: [testbed-node-3] => (item=None) 2025-05-25 03:58:59.427514 | orchestrator | changed: [testbed-node-3] 2025-05-25 03:58:59.427518 | orchestrator | changed: [testbed-node-4] => (item=None) 2025-05-25 03:58:59.427522 | orchestrator | changed: [testbed-node-4] 2025-05-25 03:58:59.427526 | orchestrator | changed: [testbed-node-5] => (item=None) 2025-05-25 03:58:59.427530 | orchestrator | changed: [testbed-node-5] 2025-05-25 03:58:59.427534 | orchestrator | 2025-05-25 03:58:59.427538 | orchestrator | TASK [ceph-rgw : Rgw pool creation tasks] ************************************** 2025-05-25 03:58:59.427542 | orchestrator | Sunday 25 May 2025 03:58:12 +0000 (0:00:01.313) 0:10:15.787 ************ 2025-05-25 03:58:59.427546 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/rgw_create_pools.yml for testbed-node-3 2025-05-25 03:58:59.427550 | orchestrator | 2025-05-25 03:58:59.427554 | orchestrator | TASK [ceph-rgw : Create ec profile] ******************************************** 2025-05-25 03:58:59.427560 | orchestrator | Sunday 25 May 2025 03:58:12 +0000 (0:00:00.215) 0:10:16.003 ************ 2025-05-25 03:58:59.427564 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2025-05-25 03:58:59.427569 | orchestrator | 
skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2025-05-25 03:58:59.427573 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2025-05-25 03:58:59.427577 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2025-05-25 03:58:59.427581 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2025-05-25 03:58:59.427585 | orchestrator | skipping: [testbed-node-3] 2025-05-25 03:58:59.427589 | orchestrator | 2025-05-25 03:58:59.427593 | orchestrator | TASK [ceph-rgw : Set crush rule] *********************************************** 2025-05-25 03:58:59.427597 | orchestrator | Sunday 25 May 2025 03:58:13 +0000 (0:00:00.815) 0:10:16.818 ************ 2025-05-25 03:58:59.427601 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2025-05-25 03:58:59.427605 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2025-05-25 03:58:59.427610 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2025-05-25 03:58:59.427614 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2025-05-25 03:58:59.427618 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2025-05-25 03:58:59.427622 | orchestrator | skipping: [testbed-node-3] 2025-05-25 03:58:59.427626 | orchestrator | 2025-05-25 03:58:59.427630 | 
orchestrator | TASK [ceph-rgw : Create rgw pools] ********************************************* 2025-05-25 03:58:59.427634 | orchestrator | Sunday 25 May 2025 03:58:14 +0000 (0:00:01.091) 0:10:17.910 ************ 2025-05-25 03:58:59.427638 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}}) 2025-05-25 03:58:59.427642 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}}) 2025-05-25 03:58:59.427650 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}}) 2025-05-25 03:58:59.427654 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}}) 2025-05-25 03:58:59.427658 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}}) 2025-05-25 03:58:59.427662 | orchestrator | 2025-05-25 03:58:59.427666 | orchestrator | TASK [ceph-rgw : Include_tasks openstack-keystone.yml] ************************* 2025-05-25 03:58:59.427670 | orchestrator | Sunday 25 May 2025 03:58:45 +0000 (0:00:31.452) 0:10:49.362 ************ 2025-05-25 03:58:59.427674 | orchestrator | skipping: [testbed-node-3] 2025-05-25 03:58:59.427678 | orchestrator | skipping: [testbed-node-4] 2025-05-25 03:58:59.427682 | orchestrator | skipping: [testbed-node-5] 2025-05-25 03:58:59.427686 | orchestrator | 2025-05-25 03:58:59.427690 | orchestrator | TASK [ceph-rgw : Include_tasks start_radosgw.yml] ****************************** 2025-05-25 03:58:59.427694 | orchestrator | Sunday 25 May 2025 03:58:46 +0000 (0:00:00.315) 0:10:49.678 
************ 2025-05-25 03:58:59.427698 | orchestrator | skipping: [testbed-node-3] 2025-05-25 03:58:59.427702 | orchestrator | skipping: [testbed-node-4] 2025-05-25 03:58:59.427706 | orchestrator | skipping: [testbed-node-5] 2025-05-25 03:58:59.427711 | orchestrator | 2025-05-25 03:58:59.427715 | orchestrator | TASK [ceph-rgw : Include start_docker_rgw.yml] ********************************* 2025-05-25 03:58:59.427719 | orchestrator | Sunday 25 May 2025 03:58:46 +0000 (0:00:00.283) 0:10:49.962 ************ 2025-05-25 03:58:59.427725 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/start_docker_rgw.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-05-25 03:58:59.427729 | orchestrator | 2025-05-25 03:58:59.427733 | orchestrator | TASK [ceph-rgw : Include_task systemd.yml] ************************************* 2025-05-25 03:58:59.427737 | orchestrator | Sunday 25 May 2025 03:58:47 +0000 (0:00:00.730) 0:10:50.692 ************ 2025-05-25 03:58:59.427741 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/systemd.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-05-25 03:58:59.427745 | orchestrator | 2025-05-25 03:58:59.427749 | orchestrator | TASK [ceph-rgw : Generate systemd unit file] *********************************** 2025-05-25 03:58:59.427753 | orchestrator | Sunday 25 May 2025 03:58:47 +0000 (0:00:00.505) 0:10:51.197 ************ 2025-05-25 03:58:59.427757 | orchestrator | changed: [testbed-node-3] 2025-05-25 03:58:59.427762 | orchestrator | changed: [testbed-node-4] 2025-05-25 03:58:59.427766 | orchestrator | changed: [testbed-node-5] 2025-05-25 03:58:59.427770 | orchestrator | 2025-05-25 03:58:59.427776 | orchestrator | TASK [ceph-rgw : Generate systemd ceph-radosgw target file] ******************** 2025-05-25 03:58:59.427780 | orchestrator | Sunday 25 May 2025 03:58:48 +0000 (0:00:01.283) 0:10:52.481 ************ 2025-05-25 03:58:59.427784 | orchestrator | changed: [testbed-node-3] 2025-05-25 03:58:59.427788 | orchestrator | 
changed: [testbed-node-4] 2025-05-25 03:58:59.427792 | orchestrator | changed: [testbed-node-5] 2025-05-25 03:58:59.427796 | orchestrator | 2025-05-25 03:58:59.427800 | orchestrator | TASK [ceph-rgw : Enable ceph-radosgw.target] *********************************** 2025-05-25 03:58:59.427804 | orchestrator | Sunday 25 May 2025 03:58:50 +0000 (0:00:01.448) 0:10:53.929 ************ 2025-05-25 03:58:59.427808 | orchestrator | changed: [testbed-node-3] 2025-05-25 03:58:59.427812 | orchestrator | changed: [testbed-node-4] 2025-05-25 03:58:59.427816 | orchestrator | changed: [testbed-node-5] 2025-05-25 03:58:59.427820 | orchestrator | 2025-05-25 03:58:59.427824 | orchestrator | TASK [ceph-rgw : Systemd start rgw container] ********************************** 2025-05-25 03:58:59.427828 | orchestrator | Sunday 25 May 2025 03:58:52 +0000 (0:00:01.683) 0:10:55.613 ************ 2025-05-25 03:58:59.427832 | orchestrator | changed: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081}) 2025-05-25 03:58:59.427839 | orchestrator | changed: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081}) 2025-05-25 03:58:59.427844 | orchestrator | changed: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081}) 2025-05-25 03:58:59.427848 | orchestrator | 2025-05-25 03:58:59.427852 | orchestrator | RUNNING HANDLER [ceph-handler : Make tempdir for scripts] ********************** 2025-05-25 03:58:59.427856 | orchestrator | Sunday 25 May 2025 03:58:54 +0000 (0:00:02.533) 0:10:58.147 ************ 2025-05-25 03:58:59.427860 | orchestrator | skipping: [testbed-node-3] 2025-05-25 03:58:59.427864 | orchestrator | skipping: [testbed-node-4] 2025-05-25 03:58:59.427868 | orchestrator | skipping: [testbed-node-5] 2025-05-25 03:58:59.427872 | orchestrator | 2025-05-25 03:58:59.427876 | orchestrator | RUNNING HANDLER 
[ceph-handler : Rgws handler] ********************************** 2025-05-25 03:58:59.427880 | orchestrator | Sunday 25 May 2025 03:58:54 +0000 (0:00:00.337) 0:10:58.484 ************ 2025-05-25 03:58:59.427884 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_rgws.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-05-25 03:58:59.427888 | orchestrator | 2025-05-25 03:58:59.427892 | orchestrator | RUNNING HANDLER [ceph-handler : Set _rgw_handler_called before restart] ******** 2025-05-25 03:58:59.427897 | orchestrator | Sunday 25 May 2025 03:58:55 +0000 (0:00:00.537) 0:10:59.022 ************ 2025-05-25 03:58:59.427901 | orchestrator | ok: [testbed-node-3] 2025-05-25 03:58:59.427919 | orchestrator | ok: [testbed-node-4] 2025-05-25 03:58:59.427923 | orchestrator | ok: [testbed-node-5] 2025-05-25 03:58:59.427927 | orchestrator | 2025-05-25 03:58:59.427931 | orchestrator | RUNNING HANDLER [ceph-handler : Copy rgw restart script] *********************** 2025-05-25 03:58:59.427935 | orchestrator | Sunday 25 May 2025 03:58:56 +0000 (0:00:00.555) 0:10:59.577 ************ 2025-05-25 03:58:59.427939 | orchestrator | skipping: [testbed-node-3] 2025-05-25 03:58:59.427943 | orchestrator | skipping: [testbed-node-4] 2025-05-25 03:58:59.427947 | orchestrator | skipping: [testbed-node-5] 2025-05-25 03:58:59.427951 | orchestrator | 2025-05-25 03:58:59.427955 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph rgw daemon(s)] ******************** 2025-05-25 03:58:59.427959 | orchestrator | Sunday 25 May 2025 03:58:56 +0000 (0:00:00.329) 0:10:59.907 ************ 2025-05-25 03:58:59.427963 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-05-25 03:58:59.427967 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-05-25 03:58:59.427971 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-05-25 03:58:59.427975 | orchestrator | skipping: [testbed-node-3] 2025-05-25 03:58:59.427979 | 
orchestrator |
2025-05-25 03:58:59.427983 | orchestrator | RUNNING HANDLER [ceph-handler : Set _rgw_handler_called after restart] *********
2025-05-25 03:58:59.427987 | orchestrator | Sunday 25 May 2025 03:58:56 +0000 (0:00:00.602) 0:11:00.510 ************
2025-05-25 03:58:59.427991 | orchestrator | ok: [testbed-node-3]
2025-05-25 03:58:59.427995 | orchestrator | ok: [testbed-node-4]
2025-05-25 03:58:59.427999 | orchestrator | ok: [testbed-node-5]
2025-05-25 03:58:59.428003 | orchestrator |
2025-05-25 03:58:59.428007 | orchestrator | PLAY RECAP *********************************************************************
2025-05-25 03:58:59.428012 | orchestrator | testbed-node-0 : ok=141  changed=36  unreachable=0 failed=0 skipped=135  rescued=0 ignored=0
2025-05-25 03:58:59.428016 | orchestrator | testbed-node-1 : ok=127  changed=31  unreachable=0 failed=0 skipped=120  rescued=0 ignored=0
2025-05-25 03:58:59.428020 | orchestrator | testbed-node-2 : ok=134  changed=33  unreachable=0 failed=0 skipped=119  rescued=0 ignored=0
2025-05-25 03:58:59.428024 | orchestrator | testbed-node-3 : ok=186  changed=44  unreachable=0 failed=0 skipped=152  rescued=0 ignored=0
2025-05-25 03:58:59.428028 | orchestrator | testbed-node-4 : ok=175  changed=40  unreachable=0 failed=0 skipped=123  rescued=0 ignored=0
2025-05-25 03:58:59.428035 | orchestrator | testbed-node-5 : ok=177  changed=41  unreachable=0 failed=0 skipped=121  rescued=0 ignored=0
2025-05-25 03:58:59.428039 | orchestrator |
2025-05-25 03:58:59.428043 | orchestrator |
2025-05-25 03:58:59.428047 | orchestrator |
2025-05-25 03:58:59.428051 | orchestrator | TASKS RECAP ********************************************************************
2025-05-25 03:58:59.428055 | orchestrator | Sunday 25 May 2025 03:58:57 +0000 (0:00:00.258) 0:11:00.769 ************
2025-05-25 03:58:59.428062 | orchestrator | ===============================================================================
2025-05-25 03:58:59.428066 | orchestrator | ceph-container-common : Pulling Ceph container image ------------------- 71.58s
2025-05-25 03:58:59.428070 | orchestrator | ceph-osd : Use ceph-volume to create osds ------------------------------ 38.82s
2025-05-25 03:58:59.428074 | orchestrator | ceph-rgw : Create rgw pools -------------------------------------------- 31.45s
2025-05-25 03:58:59.428078 | orchestrator | ceph-mgr : Wait for all mgr to be up ----------------------------------- 29.96s
2025-05-25 03:58:59.428082 | orchestrator | ceph-mon : Waiting for the monitor(s) to form the quorum... ------------ 21.77s
2025-05-25 03:58:59.428086 | orchestrator | ceph-mon : Set cluster configs ----------------------------------------- 13.97s
2025-05-25 03:58:59.428090 | orchestrator | ceph-osd : Wait for all osd to be up ----------------------------------- 12.93s
2025-05-25 03:58:59.428094 | orchestrator | ceph-mgr : Create ceph mgr keyring(s) on a mon node -------------------- 10.54s
2025-05-25 03:58:59.428115 | orchestrator | ceph-mon : Fetch ceph initial keys ------------------------------------- 10.25s
2025-05-25 03:58:59.428119 | orchestrator | ceph-mds : Create filesystem pools -------------------------------------- 7.67s
2025-05-25 03:58:59.428123 | orchestrator | ceph-mgr : Disable ceph mgr enabled modules ----------------------------- 6.36s
2025-05-25 03:58:59.428127 | orchestrator | ceph-config : Create ceph initial directories --------------------------- 6.06s
2025-05-25 03:58:59.428131 | orchestrator | ceph-mgr : Add modules to ceph-mgr -------------------------------------- 4.47s
2025-05-25 03:58:59.428135 | orchestrator | ceph-rgw : Create rgw keyrings ------------------------------------------ 4.13s
2025-05-25 03:58:59.428139 | orchestrator | ceph-config : Run 'ceph-volume lvm list' to see how many osds have already been created --- 3.88s
2025-05-25 03:58:59.428143 | orchestrator | ceph-crash : Create client.crash keyring -------------------------------- 3.81s
2025-05-25 03:58:59.428147 | orchestrator | ceph-facts : Set_fact _monitor_addresses - ipv4 ------------------------- 3.76s
2025-05-25 03:58:59.428151 | orchestrator | ceph-mds : Create ceph filesystem --------------------------------------- 3.61s
2025-05-25 03:58:59.428155 | orchestrator | ceph-osd : Systemd start osd -------------------------------------------- 3.50s
2025-05-25 03:58:59.428159 | orchestrator | ceph-mon : Copy admin keyring over to mons ------------------------------ 3.37s
2025-05-25 03:58:59.428163 | orchestrator | 2025-05-25 03:58:59 | INFO  | Task 1ffff23a-f667-4103-988d-40e3be169ada is in state STARTED
2025-05-25 03:58:59.428168 | orchestrator | 2025-05-25 03:58:59 | INFO  | Task 11e16315-c41a-431d-9d8d-20890955f9ed is in state STARTED
2025-05-25 03:58:59.428172 | orchestrator | 2025-05-25 03:58:59 | INFO  | Wait 1 second(s) until the next check
2025-05-25 03:59:02.463536 | orchestrator | 2025-05-25 03:59:02 | INFO  | Task ff96587e-20b9-4298-a3c1-3571b1e2dcb3 is in state STARTED
2025-05-25 03:59:02.464514 | orchestrator | 2025-05-25 03:59:02 | INFO  | Task 1ffff23a-f667-4103-988d-40e3be169ada is in state STARTED
2025-05-25 03:59:02.466651 | orchestrator | 2025-05-25 03:59:02 | INFO  | Task 11e16315-c41a-431d-9d8d-20890955f9ed is in state STARTED
2025-05-25 03:59:02.466707 | orchestrator | 2025-05-25 03:59:02 | INFO  | Wait 1 second(s) until the next check
2025-05-25 03:59:05.512964 | orchestrator | 2025-05-25 03:59:05 | INFO  | Task ff96587e-20b9-4298-a3c1-3571b1e2dcb3 is in state STARTED
2025-05-25 03:59:05.513777 | orchestrator | 2025-05-25 03:59:05 | INFO  | Task 1ffff23a-f667-4103-988d-40e3be169ada is in state STARTED
2025-05-25 03:59:05.515510 | orchestrator | 2025-05-25 03:59:05 | INFO  | Task 11e16315-c41a-431d-9d8d-20890955f9ed is in state STARTED
2025-05-25 03:59:05.515540 | orchestrator | 2025-05-25 03:59:05 | INFO  | Wait 1 second(s) until the next check
2025-05-25 03:59:08.566604 | orchestrator | 2025-05-25 03:59:08 | INFO  | Task ff96587e-20b9-4298-a3c1-3571b1e2dcb3 is in
state STARTED
2025-05-25 03:59:08.568722 | orchestrator | 2025-05-25 03:59:08 | INFO  | Task 1ffff23a-f667-4103-988d-40e3be169ada is in state STARTED
2025-05-25 03:59:08.570792 | orchestrator | 2025-05-25 03:59:08 | INFO  | Task 11e16315-c41a-431d-9d8d-20890955f9ed is in state STARTED
2025-05-25 03:59:08.570832 | orchestrator | 2025-05-25 03:59:08 | INFO  | Wait 1 second(s) until the next check
2025-05-25 03:59:54.356841 | orchestrator | 2025-05-25 03:59:54 | INFO  | Task ff96587e-20b9-4298-a3c1-3571b1e2dcb3 is in state STARTED
2025-05-25 03:59:54.358265 | orchestrator | 2025-05-25 03:59:54 | INFO  | Task 1ffff23a-f667-4103-988d-40e3be169ada is in state STARTED
2025-05-25 03:59:54.359665 | orchestrator |
2025-05-25 03:59:54 | INFO  | Task 11e16315-c41a-431d-9d8d-20890955f9ed is in state STARTED 2025-05-25 03:59:54.359694 | orchestrator | 2025-05-25 03:59:54 | INFO  | Wait 1 second(s) until the next check 2025-05-25 03:59:57.414768 | orchestrator | 2025-05-25 03:59:57 | INFO  | Task ff96587e-20b9-4298-a3c1-3571b1e2dcb3 is in state STARTED 2025-05-25 03:59:57.420410 | orchestrator | 2025-05-25 03:59:57.420574 | orchestrator | 2025-05-25 03:59:57.420587 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-05-25 03:59:57.420599 | orchestrator | 2025-05-25 03:59:57.420611 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-05-25 03:59:57.420622 | orchestrator | Sunday 25 May 2025 03:56:55 +0000 (0:00:00.264) 0:00:00.264 ************ 2025-05-25 03:59:57.420633 | orchestrator | ok: [testbed-node-0] 2025-05-25 03:59:57.420645 | orchestrator | ok: [testbed-node-1] 2025-05-25 03:59:57.420656 | orchestrator | ok: [testbed-node-2] 2025-05-25 03:59:57.420667 | orchestrator | 2025-05-25 03:59:57.420678 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-05-25 03:59:57.420689 | orchestrator | Sunday 25 May 2025 03:56:55 +0000 (0:00:00.292) 0:00:00.557 ************ 2025-05-25 03:59:57.420718 | orchestrator | ok: [testbed-node-0] => (item=enable_opensearch_True) 2025-05-25 03:59:57.420732 | orchestrator | ok: [testbed-node-1] => (item=enable_opensearch_True) 2025-05-25 03:59:57.420743 | orchestrator | ok: [testbed-node-2] => (item=enable_opensearch_True) 2025-05-25 03:59:57.420755 | orchestrator | 2025-05-25 03:59:57.420766 | orchestrator | PLAY [Apply role opensearch] *************************************************** 2025-05-25 03:59:57.420778 | orchestrator | 2025-05-25 03:59:57.420934 | orchestrator | TASK [opensearch : include_tasks] ********************************************** 2025-05-25 03:59:57.420952 | orchestrator | 
Sunday 25 May 2025 03:56:56 +0000 (0:00:00.402) 0:00:00.959 ************ 2025-05-25 03:59:57.420963 | orchestrator | included: /ansible/roles/opensearch/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-25 03:59:57.420975 | orchestrator | 2025-05-25 03:59:57.420987 | orchestrator | TASK [opensearch : Setting sysctl values] ************************************** 2025-05-25 03:59:57.420998 | orchestrator | Sunday 25 May 2025 03:56:56 +0000 (0:00:00.465) 0:00:01.425 ************ 2025-05-25 03:59:57.421025 | orchestrator | changed: [testbed-node-1] => (item={'name': 'vm.max_map_count', 'value': 262144}) 2025-05-25 03:59:57.421037 | orchestrator | changed: [testbed-node-2] => (item={'name': 'vm.max_map_count', 'value': 262144}) 2025-05-25 03:59:57.421049 | orchestrator | changed: [testbed-node-0] => (item={'name': 'vm.max_map_count', 'value': 262144}) 2025-05-25 03:59:57.421061 | orchestrator | 2025-05-25 03:59:57.421073 | orchestrator | TASK [opensearch : Ensuring config directories exist] ************************** 2025-05-25 03:59:57.421084 | orchestrator | Sunday 25 May 2025 03:56:58 +0000 (0:00:01.640) 0:00:03.066 ************ 2025-05-25 03:59:57.421100 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 
'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-05-25 03:59:57.421137 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-05-25 03:59:57.421164 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-05-25 03:59:57.421179 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 
'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-05-25 03:59:57.421198 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-05-25 03:59:57.421226 | 
orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-05-25 03:59:57.421238 | orchestrator | 2025-05-25 03:59:57.421251 | orchestrator | TASK [opensearch : include_tasks] ********************************************** 2025-05-25 03:59:57.421262 | orchestrator | Sunday 25 May 2025 03:57:00 +0000 (0:00:01.802) 0:00:04.868 ************ 2025-05-25 03:59:57.421274 | orchestrator | included: /ansible/roles/opensearch/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-25 03:59:57.421292 | orchestrator | 2025-05-25 03:59:57.421310 | orchestrator | TASK [service-cert-copy : opensearch | Copying over extra CA certificates] ***** 2025-05-25 03:59:57.421328 | orchestrator | Sunday 25 May 2025 03:57:00 +0000 (0:00:00.507) 0:00:05.375 ************ 2025-05-25 03:59:57.421368 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 
'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-05-25 03:59:57.421393 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-05-25 03:59:57.421420 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-05-25 03:59:57.421451 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-05-25 03:59:57.421481 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-05-25 03:59:57.421501 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-05-25 03:59:57.421521 | orchestrator | 2025-05-25 03:59:57.421537 | orchestrator | TASK [service-cert-copy : opensearch | Copying over backend internal TLS certificate] *** 2025-05-25 03:59:57.421573 | orchestrator | Sunday 25 May 2025 03:57:03 +0000 (0:00:02.973) 0:00:08.349 ************ 2025-05-25 
03:59:57.421593 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2025-05-25 03:59:57.421609 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2025-05-25 03:59:57.421622 | orchestrator | skipping: 
[testbed-node-0] 2025-05-25 03:59:57.421634 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2025-05-25 03:59:57.421655 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2025-05-25 03:59:57.421677 
| orchestrator | skipping: [testbed-node-1] 2025-05-25 03:59:57.421694 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2025-05-25 03:59:57.421707 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  
2025-05-25 03:59:57.421719 | orchestrator | skipping: [testbed-node-2] 2025-05-25 03:59:57.421730 | orchestrator | 2025-05-25 03:59:57.421741 | orchestrator | TASK [service-cert-copy : opensearch | Copying over backend internal TLS key] *** 2025-05-25 03:59:57.421752 | orchestrator | Sunday 25 May 2025 03:57:04 +0000 (0:00:01.086) 0:00:09.436 ************ 2025-05-25 03:59:57.421763 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2025-05-25 03:59:57.421783 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 
'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2025-05-25 03:59:57.421802 | orchestrator | skipping: [testbed-node-0] 2025-05-25 03:59:57.421818 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2025-05-25 03:59:57.421830 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': 
{'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2025-05-25 03:59:57.421842 | orchestrator | skipping: [testbed-node-1] 2025-05-25 03:59:57.421853 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2025-05-25 03:59:57.421900 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 
'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2025-05-25 03:59:57.421920 | orchestrator | skipping: [testbed-node-2] 2025-05-25 03:59:57.421931 | orchestrator | 2025-05-25 03:59:57.421942 | orchestrator | TASK [opensearch : Copying over config.json files for services] **************** 2025-05-25 03:59:57.421953 | orchestrator | Sunday 25 May 2025 03:57:05 +0000 (0:00:00.969) 0:00:10.405 ************ 2025-05-25 03:59:57.421975 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-05-25 03:59:57.421988 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': 
['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-05-25 03:59:57.422000 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-05-25 03:59:57.422075 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': 
'5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-05-25 03:59:57.422105 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-05-25 03:59:57.422118 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-05-25 03:59:57.422130 | orchestrator | 2025-05-25 03:59:57.422141 | orchestrator | TASK [opensearch : Copying over opensearch service config file] **************** 2025-05-25 03:59:57.422152 | orchestrator | Sunday 25 May 2025 03:57:08 +0000 (0:00:02.483) 0:00:12.889 ************ 2025-05-25 03:59:57.422163 | orchestrator | changed: [testbed-node-2] 2025-05-25 03:59:57.422175 | orchestrator | changed: [testbed-node-1] 2025-05-25 03:59:57.422186 | orchestrator | changed: [testbed-node-0] 2025-05-25 03:59:57.422197 | orchestrator | 2025-05-25 03:59:57.422208 | orchestrator | TASK [opensearch : Copying over opensearch-dashboards config file] ************* 2025-05-25 03:59:57.422219 | orchestrator | Sunday 25 May 2025 03:57:11 +0000 (0:00:03.006) 0:00:15.895 ************ 2025-05-25 03:59:57.422230 | orchestrator | changed: [testbed-node-0] 2025-05-25 03:59:57.422241 | orchestrator | changed: [testbed-node-2] 2025-05-25 03:59:57.422252 | orchestrator | changed: [testbed-node-1] 2025-05-25 03:59:57.422262 | orchestrator | 2025-05-25 03:59:57.422273 | orchestrator | TASK [opensearch : Check opensearch containers] ******************************** 2025-05-25 03:59:57.422284 | orchestrator | Sunday 25 May 2025 03:57:12 +0000 (0:00:01.657) 0:00:17.553 ************ 2025-05-25 03:59:57.422295 | orchestrator | changed: 
[testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-05-25 03:59:57 | INFO  | Task 1ffff23a-f667-4103-988d-40e3be169ada is in state SUCCESS 2025-05-25 03:59:57.422321 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-05-25 03:59:57.422352 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 
'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-05-25 03:59:57.422365 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-05-25 03:59:57.422377 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': 
{'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-05-25 03:59:57.422405 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-05-25 03:59:57.422417 | orchestrator | 
2025-05-25 03:59:57.422428 | orchestrator | TASK [opensearch : include_tasks] ********************************************** 2025-05-25 03:59:57.422439 | orchestrator | Sunday 25 May 2025 03:57:14 +0000 (0:00:02.036) 0:00:19.590 ************ 2025-05-25 03:59:57.422450 | orchestrator | skipping: [testbed-node-0] 2025-05-25 03:59:57.422461 | orchestrator | skipping: [testbed-node-1] 2025-05-25 03:59:57.422471 | orchestrator | skipping: [testbed-node-2] 2025-05-25 03:59:57.422482 | orchestrator | 2025-05-25 03:59:57.422493 | orchestrator | TASK [opensearch : Flush handlers] ********************************************* 2025-05-25 03:59:57.422504 | orchestrator | Sunday 25 May 2025 03:57:15 +0000 (0:00:00.263) 0:00:19.853 ************ 2025-05-25 03:59:57.422515 | orchestrator | 2025-05-25 03:59:57.422525 | orchestrator | TASK [opensearch : Flush handlers] ********************************************* 2025-05-25 03:59:57.422541 | orchestrator | Sunday 25 May 2025 03:57:15 +0000 (0:00:00.061) 0:00:19.915 ************ 2025-05-25 03:59:57.422552 | orchestrator | 2025-05-25 03:59:57.422563 | orchestrator | TASK [opensearch : Flush handlers] ********************************************* 2025-05-25 03:59:57.422574 | orchestrator | Sunday 25 May 2025 03:57:15 +0000 (0:00:00.061) 0:00:19.976 ************ 2025-05-25 03:59:57.422584 | orchestrator | 2025-05-25 03:59:57.422595 | orchestrator | RUNNING HANDLER [opensearch : Disable shard allocation] ************************ 2025-05-25 03:59:57.422605 | orchestrator | Sunday 25 May 2025 03:57:15 +0000 (0:00:00.239) 0:00:20.215 ************ 2025-05-25 03:59:57.422616 | orchestrator | skipping: [testbed-node-0] 2025-05-25 03:59:57.422627 | orchestrator | 2025-05-25 03:59:57.422638 | orchestrator | RUNNING HANDLER [opensearch : Perform a flush] ********************************* 2025-05-25 03:59:57.422648 | orchestrator | Sunday 25 May 2025 03:57:15 +0000 (0:00:00.215) 0:00:20.431 ************ 2025-05-25 03:59:57.422659 | orchestrator | 
skipping: [testbed-node-0] 2025-05-25 03:59:57.422670 | orchestrator | 2025-05-25 03:59:57.422681 | orchestrator | RUNNING HANDLER [opensearch : Restart opensearch container] ******************** 2025-05-25 03:59:57.422691 | orchestrator | Sunday 25 May 2025 03:57:15 +0000 (0:00:00.198) 0:00:20.629 ************ 2025-05-25 03:59:57.422702 | orchestrator | changed: [testbed-node-0] 2025-05-25 03:59:57.422713 | orchestrator | changed: [testbed-node-2] 2025-05-25 03:59:57.422723 | orchestrator | changed: [testbed-node-1] 2025-05-25 03:59:57.422734 | orchestrator | 2025-05-25 03:59:57.422745 | orchestrator | RUNNING HANDLER [opensearch : Restart opensearch-dashboards container] ********* 2025-05-25 03:59:57.422755 | orchestrator | Sunday 25 May 2025 03:58:29 +0000 (0:01:13.850) 0:01:34.480 ************ 2025-05-25 03:59:57.422766 | orchestrator | changed: [testbed-node-0] 2025-05-25 03:59:57.422777 | orchestrator | changed: [testbed-node-2] 2025-05-25 03:59:57.422787 | orchestrator | changed: [testbed-node-1] 2025-05-25 03:59:57.422798 | orchestrator | 2025-05-25 03:59:57.422809 | orchestrator | TASK [opensearch : include_tasks] ********************************************** 2025-05-25 03:59:57.422819 | orchestrator | Sunday 25 May 2025 03:59:46 +0000 (0:01:16.457) 0:02:50.937 ************ 2025-05-25 03:59:57.422837 | orchestrator | included: /ansible/roles/opensearch/tasks/post-config.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-25 03:59:57.422848 | orchestrator | 2025-05-25 03:59:57.422859 | orchestrator | TASK [opensearch : Wait for OpenSearch to become ready] ************************ 2025-05-25 03:59:57.422888 | orchestrator | Sunday 25 May 2025 03:59:46 +0000 (0:00:00.683) 0:02:51.621 ************ 2025-05-25 03:59:57.422899 | orchestrator | ok: [testbed-node-0] 2025-05-25 03:59:57.422910 | orchestrator | 2025-05-25 03:59:57.422921 | orchestrator | TASK [opensearch : Check if a log retention policy exists] ********************* 2025-05-25 
03:59:57.422932 | orchestrator | Sunday 25 May 2025 03:59:48 +0000 (0:00:02.211) 0:02:53.832 ************ 2025-05-25 03:59:57.422943 | orchestrator | ok: [testbed-node-0] 2025-05-25 03:59:57.422954 | orchestrator | 2025-05-25 03:59:57.422965 | orchestrator | TASK [opensearch : Create new log retention policy] **************************** 2025-05-25 03:59:57.422975 | orchestrator | Sunday 25 May 2025 03:59:51 +0000 (0:00:02.266) 0:02:56.099 ************ 2025-05-25 03:59:57.422986 | orchestrator | changed: [testbed-node-0] 2025-05-25 03:59:57.422997 | orchestrator | 2025-05-25 03:59:57.423008 | orchestrator | TASK [opensearch : Apply retention policy to existing indices] ***************** 2025-05-25 03:59:57.423019 | orchestrator | Sunday 25 May 2025 03:59:53 +0000 (0:00:02.553) 0:02:58.653 ************ 2025-05-25 03:59:57.423029 | orchestrator | changed: [testbed-node-0] 2025-05-25 03:59:57.423040 | orchestrator | 2025-05-25 03:59:57.423051 | orchestrator | PLAY RECAP ********************************************************************* 2025-05-25 03:59:57.423063 | orchestrator | testbed-node-0 : ok=18  changed=11  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2025-05-25 03:59:57.423074 | orchestrator | testbed-node-1 : ok=14  changed=9  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2025-05-25 03:59:57.423093 | orchestrator | testbed-node-2 : ok=14  changed=9  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2025-05-25 03:59:57.423104 | orchestrator | 2025-05-25 03:59:57.423116 | orchestrator | 2025-05-25 03:59:57.423126 | orchestrator | TASKS RECAP ******************************************************************** 2025-05-25 03:59:57.423137 | orchestrator | Sunday 25 May 2025 03:59:56 +0000 (0:00:02.432) 0:03:01.085 ************ 2025-05-25 03:59:57.423148 | orchestrator | =============================================================================== 2025-05-25 03:59:57.423159 | orchestrator | opensearch : Restart opensearch-dashboards 
container ------------------- 76.46s 2025-05-25 03:59:57.423170 | orchestrator | opensearch : Restart opensearch container ------------------------------ 73.85s 2025-05-25 03:59:57.423181 | orchestrator | opensearch : Copying over opensearch service config file ---------------- 3.01s 2025-05-25 03:59:57.423192 | orchestrator | service-cert-copy : opensearch | Copying over extra CA certificates ----- 2.97s 2025-05-25 03:59:57.423202 | orchestrator | opensearch : Create new log retention policy ---------------------------- 2.55s 2025-05-25 03:59:57.423213 | orchestrator | opensearch : Copying over config.json files for services ---------------- 2.48s 2025-05-25 03:59:57.423224 | orchestrator | opensearch : Apply retention policy to existing indices ----------------- 2.43s 2025-05-25 03:59:57.423235 | orchestrator | opensearch : Check if a log retention policy exists --------------------- 2.27s 2025-05-25 03:59:57.423246 | orchestrator | opensearch : Wait for OpenSearch to become ready ------------------------ 2.21s 2025-05-25 03:59:57.423257 | orchestrator | opensearch : Check opensearch containers -------------------------------- 2.04s 2025-05-25 03:59:57.423267 | orchestrator | opensearch : Ensuring config directories exist -------------------------- 1.80s 2025-05-25 03:59:57.423278 | orchestrator | opensearch : Copying over opensearch-dashboards config file ------------- 1.66s 2025-05-25 03:59:57.423294 | orchestrator | opensearch : Setting sysctl values -------------------------------------- 1.64s 2025-05-25 03:59:57.423306 | orchestrator | service-cert-copy : opensearch | Copying over backend internal TLS certificate --- 1.09s 2025-05-25 03:59:57.423327 | orchestrator | service-cert-copy : opensearch | Copying over backend internal TLS key --- 0.97s 2025-05-25 03:59:57.423338 | orchestrator | opensearch : include_tasks ---------------------------------------------- 0.68s 2025-05-25 03:59:57.423349 | orchestrator | opensearch : include_tasks 
---------------------------------------------- 0.51s 2025-05-25 03:59:57.423359 | orchestrator | opensearch : include_tasks ---------------------------------------------- 0.47s 2025-05-25 03:59:57.423370 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.40s 2025-05-25 03:59:57.423381 | orchestrator | opensearch : Flush handlers --------------------------------------------- 0.36s 2025-05-25 03:59:57.423392 | orchestrator | 2025-05-25 03:59:57 | INFO  | Task 11e16315-c41a-431d-9d8d-20890955f9ed is in state STARTED 2025-05-25 03:59:57.423403 | orchestrator | 2025-05-25 03:59:57 | INFO  | Wait 1 second(s) until the next check 2025-05-25 04:00:00.470429 | orchestrator | 2025-05-25 04:00:00 | INFO  | Task ff96587e-20b9-4298-a3c1-3571b1e2dcb3 is in state STARTED 2025-05-25 04:00:00.471527 | orchestrator | 2025-05-25 04:00:00 | INFO  | Task 11e16315-c41a-431d-9d8d-20890955f9ed is in state STARTED 2025-05-25 04:00:00.471564 | orchestrator | 2025-05-25 04:00:00 | INFO  | Wait 1 second(s) until the next check 2025-05-25 04:00:03.515847 | orchestrator | 2025-05-25 04:00:03 | INFO  | Task ff96587e-20b9-4298-a3c1-3571b1e2dcb3 is in state SUCCESS 2025-05-25 04:00:03.520362 | orchestrator | 2025-05-25 04:00:03.520539 | orchestrator | 2025-05-25 04:00:03.520554 | orchestrator | PLAY [Set kolla_action_mariadb] ************************************************ 2025-05-25 04:00:03.520565 | orchestrator | 2025-05-25 04:00:03.520577 | orchestrator | TASK [Inform the user about the following task] ******************************** 2025-05-25 04:00:03.520589 | orchestrator | Sunday 25 May 2025 03:56:55 +0000 (0:00:00.118) 0:00:00.118 ************ 2025-05-25 04:00:03.520816 | orchestrator | ok: [localhost] => { 2025-05-25 04:00:03.520836 | orchestrator |  "msg": "The task 'Check MariaDB service' fails if the MariaDB service has not yet been deployed. This is fine." 
2025-05-25 04:00:03.520848 | orchestrator | } 2025-05-25 04:00:03.520860 | orchestrator | 2025-05-25 04:00:03.520980 | orchestrator | TASK [Check MariaDB service] *************************************************** 2025-05-25 04:00:03.520993 | orchestrator | Sunday 25 May 2025 03:56:55 +0000 (0:00:00.037) 0:00:00.156 ************ 2025-05-25 04:00:03.521005 | orchestrator | fatal: [localhost]: FAILED! => {"changed": false, "elapsed": 2, "msg": "Timeout when waiting for search string MariaDB in 192.168.16.9:3306"} 2025-05-25 04:00:03.521018 | orchestrator | ...ignoring 2025-05-25 04:00:03.521029 | orchestrator | 2025-05-25 04:00:03.521040 | orchestrator | TASK [Set kolla_action_mariadb = upgrade if MariaDB is already running] ******** 2025-05-25 04:00:03.521051 | orchestrator | Sunday 25 May 2025 03:56:58 +0000 (0:00:02.870) 0:00:03.027 ************ 2025-05-25 04:00:03.521062 | orchestrator | skipping: [localhost] 2025-05-25 04:00:03.521073 | orchestrator | 2025-05-25 04:00:03.521084 | orchestrator | TASK [Set kolla_action_mariadb = kolla_action_ng] ****************************** 2025-05-25 04:00:03.521095 | orchestrator | Sunday 25 May 2025 03:56:58 +0000 (0:00:00.105) 0:00:03.133 ************ 2025-05-25 04:00:03.521105 | orchestrator | ok: [localhost] 2025-05-25 04:00:03.521116 | orchestrator | 2025-05-25 04:00:03.521127 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-05-25 04:00:03.521137 | orchestrator | 2025-05-25 04:00:03.521148 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-05-25 04:00:03.521159 | orchestrator | Sunday 25 May 2025 03:56:58 +0000 (0:00:00.227) 0:00:03.360 ************ 2025-05-25 04:00:03.521170 | orchestrator | ok: [testbed-node-0] 2025-05-25 04:00:03.521181 | orchestrator | ok: [testbed-node-1] 2025-05-25 04:00:03.521191 | orchestrator | ok: [testbed-node-2] 2025-05-25 04:00:03.521274 | orchestrator | 2025-05-25 04:00:03.521285 | 
orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-05-25 04:00:03.521296 | orchestrator | Sunday 25 May 2025 03:56:59 +0000 (0:00:00.494) 0:00:03.854 ************ 2025-05-25 04:00:03.521334 | orchestrator | ok: [testbed-node-0] => (item=enable_mariadb_True) 2025-05-25 04:00:03.521396 | orchestrator | ok: [testbed-node-1] => (item=enable_mariadb_True) 2025-05-25 04:00:03.521408 | orchestrator | ok: [testbed-node-2] => (item=enable_mariadb_True) 2025-05-25 04:00:03.521419 | orchestrator | 2025-05-25 04:00:03.521429 | orchestrator | PLAY [Apply role mariadb] ****************************************************** 2025-05-25 04:00:03.521441 | orchestrator | 2025-05-25 04:00:03.521455 | orchestrator | TASK [mariadb : Group MariaDB hosts based on shards] *************************** 2025-05-25 04:00:03.521468 | orchestrator | Sunday 25 May 2025 03:56:59 +0000 (0:00:00.545) 0:00:04.400 ************ 2025-05-25 04:00:03.521481 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2025-05-25 04:00:03.521495 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-1) 2025-05-25 04:00:03.521508 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-2) 2025-05-25 04:00:03.521521 | orchestrator | 2025-05-25 04:00:03.521570 | orchestrator | TASK [mariadb : include_tasks] ************************************************* 2025-05-25 04:00:03.521584 | orchestrator | Sunday 25 May 2025 03:56:59 +0000 (0:00:00.360) 0:00:04.761 ************ 2025-05-25 04:00:03.521597 | orchestrator | included: /ansible/roles/mariadb/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-25 04:00:03.521611 | orchestrator | 2025-05-25 04:00:03.521624 | orchestrator | TASK [mariadb : Ensuring config directories exist] ***************************** 2025-05-25 04:00:03.521637 | orchestrator | Sunday 25 May 2025 03:57:00 +0000 (0:00:00.532) 0:00:05.293 ************ 2025-05-25 04:00:03.521688 | orchestrator | changed: 
[testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2025-05-25 04:00:03.521709 | orchestrator | changed: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 
'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2025-05-25 04:00:03.521741 | orchestrator | changed: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2025-05-25 04:00:03.521755 | orchestrator | 2025-05-25 04:00:03.521776 | orchestrator | TASK [mariadb : Ensuring database backup config directory exists] ************** 2025-05-25 04:00:03.521790 | orchestrator | Sunday 25 May 2025 03:57:04 +0000 (0:00:03.504) 0:00:08.798 ************ 2025-05-25 04:00:03.521802 | orchestrator | changed: [testbed-node-0] 2025-05-25 04:00:03.521814 | orchestrator | skipping: [testbed-node-1] 2025-05-25 04:00:03.521825 | 
orchestrator | skipping: [testbed-node-2] 2025-05-25 04:00:03.521836 | orchestrator | 2025-05-25 04:00:03.521846 | orchestrator | TASK [mariadb : Copying over my.cnf for mariabackup] *************************** 2025-05-25 04:00:03.521857 | orchestrator | Sunday 25 May 2025 03:57:04 +0000 (0:00:00.701) 0:00:09.499 ************ 2025-05-25 04:00:03.521897 | orchestrator | skipping: [testbed-node-1] 2025-05-25 04:00:03.521910 | orchestrator | skipping: [testbed-node-2] 2025-05-25 04:00:03.521921 | orchestrator | changed: [testbed-node-0] 2025-05-25 04:00:03.521932 | orchestrator | 2025-05-25 04:00:03.521942 | orchestrator | TASK [mariadb : Copying over config.json files for services] ******************* 2025-05-25 04:00:03.521953 | orchestrator | Sunday 25 May 2025 03:57:06 +0000 (0:00:01.676) 0:00:11.175 ************ 2025-05-25 04:00:03.521974 | orchestrator | changed: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 
3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2025-05-25 04:00:03.522000 | orchestrator | changed: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 
'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2025-05-25 04:00:03.522014 | orchestrator | changed: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout 
client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2025-05-25 04:00:03.522083 | orchestrator | 2025-05-25 04:00:03.522095 | orchestrator | TASK [mariadb : Copying over config.json files for mariabackup] **************** 2025-05-25 04:00:03.522106 | orchestrator | Sunday 25 May 2025 03:57:10 +0000 (0:00:03.935) 0:00:15.111 ************ 2025-05-25 04:00:03.522116 | orchestrator | skipping: [testbed-node-1] 2025-05-25 04:00:03.522127 | orchestrator | skipping: [testbed-node-2] 2025-05-25 04:00:03.522138 | orchestrator | changed: [testbed-node-0] 2025-05-25 04:00:03.522149 | orchestrator | 2025-05-25 04:00:03.522160 | orchestrator | TASK [mariadb : Copying over galera.cnf] *************************************** 2025-05-25 04:00:03.522171 | orchestrator | Sunday 25 May 2025 03:57:11 +0000 (0:00:01.047) 0:00:16.158 ************ 2025-05-25 04:00:03.522181 | orchestrator | changed: [testbed-node-0] 2025-05-25 04:00:03.522192 | orchestrator | changed: [testbed-node-1] 2025-05-25 04:00:03.522203 | orchestrator | changed: [testbed-node-2] 2025-05-25 04:00:03.522214 | orchestrator | 2025-05-25 04:00:03.522225 | orchestrator | TASK [mariadb : include_tasks] ************************************************* 2025-05-25 04:00:03.522236 | orchestrator | Sunday 25 May 2025 03:57:15 +0000 (0:00:03.976) 0:00:20.135 ************ 2025-05-25 04:00:03.522247 | orchestrator | included: /ansible/roles/mariadb/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-25 04:00:03.522258 | orchestrator | 2025-05-25 04:00:03.522274 | orchestrator | TASK [service-cert-copy : mariadb | Copying over extra CA certificates] ******** 2025-05-25 
04:00:03.522285 | orchestrator | Sunday 25 May 2025 03:57:15 +0000 (0:00:00.495) 0:00:20.631 ************ 2025-05-25 04:00:03.522307 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2025-05-25 04:00:03.522327 | orchestrator | 
skipping: [testbed-node-0] 2025-05-25 04:00:03.522340 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2025-05-25 04:00:03.522352 | orchestrator | skipping: [testbed-node-2] 2025-05-25 04:00:03.522376 | orchestrator | 
skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2025-05-25 04:00:03.522396 | orchestrator | skipping: [testbed-node-1] 2025-05-25 04:00:03.522407 | orchestrator | 2025-05-25 04:00:03.522418 | orchestrator | TASK [service-cert-copy : mariadb 
| Copying over backend internal TLS certificate] *** 2025-05-25 04:00:03.522429 | orchestrator | Sunday 25 May 2025 03:57:19 +0000 (0:00:03.220) 0:00:23.851 ************ 2025-05-25 04:00:03.522440 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 
5 backup', '']}}}})  2025-05-25 04:00:03.522452 | orchestrator | skipping: [testbed-node-0] 2025-05-25 04:00:03.522474 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2025-05-25 04:00:03.522494 | orchestrator | skipping: 
[testbed-node-2] 2025-05-25 04:00:03.522505 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2025-05-25 04:00:03.522517 | orchestrator | skipping: [testbed-node-1] 2025-05-25 04:00:03.522528 | orchestrator | 2025-05-25 
04:00:03.522539 | orchestrator | TASK [service-cert-copy : mariadb | Copying over backend internal TLS key] ***** 2025-05-25 04:00:03.522550 | orchestrator | Sunday 25 May 2025 03:57:21 +0000 (0:00:02.688) 0:00:26.540 ************ 2025-05-25 04:00:03.522567 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server 
testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2025-05-25 04:00:03.522592 | orchestrator | skipping: [testbed-node-0] 2025-05-25 04:00:03.522653 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 
backup', '']}}}})  2025-05-25 04:00:03.522697 | orchestrator | skipping: [testbed-node-1] 2025-05-25 04:00:03.522716 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2025-05-25 04:00:03.522737 | orchestrator | skipping: 
[testbed-node-2] 2025-05-25 04:00:03.522748 | orchestrator | 2025-05-25 04:00:03.522759 | orchestrator | TASK [mariadb : Check mariadb containers] ************************************** 2025-05-25 04:00:03.522770 | orchestrator | Sunday 25 May 2025 03:57:24 +0000 (0:00:03.011) 0:00:29.552 ************ 2025-05-25 04:00:03.522791 | orchestrator | changed: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 
check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2025-05-25 04:00:03.522809 | orchestrator | changed: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', 
'']}}}}) 2025-05-25 04:00:03.522837 | orchestrator | changed: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2025-05-25 04:00:03.522850 | orchestrator | 2025-05-25 04:00:03.522861 | orchestrator | TASK [mariadb : Create MariaDB volume] 
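The service definition dumped for each node above wires the container healthcheck to `/usr/bin/clustercheck` and sets `AVAILABLE_WHEN_DONOR: '1'` in the environment. As a rough, illustrative sketch of the availability rule such a check applies (this is not the actual clustercheck script):

```python
# Illustrative availability rule, not the real clustercheck script:
# Galera's wsrep_local_state is 4 when the node is Synced and 2 when it
# is a Donor/Desynced; with AVAILABLE_WHEN_DONOR=1 (as in the container
# environment above) a donor still counts as available.
def node_is_available(wsrep_local_state: int, available_when_donor: bool = True) -> bool:
    if wsrep_local_state == 4:  # Synced
        return True
    return wsrep_local_state == 2 and available_when_donor  # Donor/Desynced
```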
*****************************************
2025-05-25 04:00:03.522902 | orchestrator | Sunday 25 May 2025 03:57:28 +0000 (0:00:03.690) 0:00:33.242 ************
2025-05-25 04:00:03.522921 | orchestrator | changed: [testbed-node-0]
2025-05-25 04:00:03.522940 | orchestrator | changed: [testbed-node-1]
2025-05-25 04:00:03.522958 | orchestrator | changed: [testbed-node-2]
2025-05-25 04:00:03.522973 | orchestrator |
2025-05-25 04:00:03.522985 | orchestrator | TASK [mariadb : Divide hosts by their MariaDB volume availability] *************
2025-05-25 04:00:03.522995 | orchestrator | Sunday 25 May 2025 03:57:29 +0000 (0:00:01.130) 0:00:34.372 ************
2025-05-25 04:00:03.523006 | orchestrator | ok: [testbed-node-0]
2025-05-25 04:00:03.523018 | orchestrator | ok: [testbed-node-1]
2025-05-25 04:00:03.523028 | orchestrator | ok: [testbed-node-2]
2025-05-25 04:00:03.523039 | orchestrator |
2025-05-25 04:00:03.523050 | orchestrator | TASK [mariadb : Establish whether the cluster has already existed] *************
2025-05-25 04:00:03.523060 | orchestrator | Sunday 25 May 2025 03:57:29 +0000 (0:00:00.378) 0:00:34.751 ************
2025-05-25 04:00:03.523071 | orchestrator | ok: [testbed-node-0]
2025-05-25 04:00:03.523082 | orchestrator | ok: [testbed-node-1]
2025-05-25 04:00:03.523093 | orchestrator | ok: [testbed-node-2]
2025-05-25 04:00:03.523103 | orchestrator |
2025-05-25 04:00:03.523114 | orchestrator | TASK [mariadb : Check MariaDB service port liveness] ***************************
2025-05-25 04:00:03.523125 | orchestrator | Sunday 25 May 2025 03:57:30 +0000 (0:00:00.423) 0:00:35.175 ************
2025-05-25 04:00:03.523136 | orchestrator | fatal: [testbed-node-0]: FAILED! => {"changed": false, "elapsed": 10, "msg": "Timeout when waiting for search string MariaDB in 192.168.16.10:3306"}
2025-05-25 04:00:03.523147 | orchestrator | ...ignoring
2025-05-25 04:00:03.523159 | orchestrator | fatal: [testbed-node-2]: FAILED! => {"changed": false, "elapsed": 10, "msg": "Timeout when waiting for search string MariaDB in 192.168.16.12:3306"}
2025-05-25 04:00:03.523170 | orchestrator | ...ignoring
2025-05-25 04:00:03.523180 | orchestrator | fatal: [testbed-node-1]: FAILED! => {"changed": false, "elapsed": 10, "msg": "Timeout when waiting for search string MariaDB in 192.168.16.11:3306"}
2025-05-25 04:00:03.523199 | orchestrator | ...ignoring
2025-05-25 04:00:03.523209 | orchestrator |
2025-05-25 04:00:03.523220 | orchestrator | TASK [mariadb : Divide hosts by their MariaDB service port liveness] ***********
2025-05-25 04:00:03.523231 | orchestrator | Sunday 25 May 2025 03:57:41 +0000 (0:00:10.898) 0:00:46.073 ************
2025-05-25 04:00:03.523241 | orchestrator | ok: [testbed-node-0]
2025-05-25 04:00:03.523252 | orchestrator | ok: [testbed-node-1]
2025-05-25 04:00:03.523263 | orchestrator | ok: [testbed-node-2]
2025-05-25 04:00:03.523273 | orchestrator |
2025-05-25 04:00:03.523290 | orchestrator | TASK [mariadb : Fail on existing but stopped cluster] **************************
2025-05-25 04:00:03.523301 | orchestrator | Sunday 25 May 2025 03:57:41 +0000 (0:00:00.611) 0:00:46.685 ************
2025-05-25 04:00:03.523311 | orchestrator | skipping: [testbed-node-0]
2025-05-25 04:00:03.523322 | orchestrator | skipping: [testbed-node-1]
2025-05-25 04:00:03.523333 | orchestrator | skipping: [testbed-node-2]
2025-05-25 04:00:03.523344 | orchestrator |
2025-05-25 04:00:03.523354 | orchestrator | TASK [mariadb : Check MariaDB service WSREP sync status] ***********************
2025-05-25 04:00:03.523365 | orchestrator | Sunday 25 May 2025 03:57:42 +0000 (0:00:00.437) 0:00:47.123 ************
2025-05-25 04:00:03.523376 | orchestrator | skipping: [testbed-node-0]
2025-05-25 04:00:03.523386 | orchestrator | skipping: [testbed-node-1]
2025-05-25 04:00:03.523397 | orchestrator | skipping: [testbed-node-2]
2025-05-25 04:00:03.523408 | orchestrator |
2025-05-25 04:00:03.523419 | orchestrator | TASK
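The three ignored `FAILED!` results above come from a liveness probe that waits for the string `MariaDB` in the server greeting on port 3306; on a first deployment nothing is listening yet, so the timeout is expected and the role tolerates it with `...ignoring`. A minimal sketch of such a probe (a hypothetical helper, not the role's actual task):

```python
import socket

def banner_has_mariadb(banner: bytes) -> bool:
    # The MariaDB handshake packet carries a version string such as
    # "5.5.5-10.11.6-MariaDB-log", so a substring check suffices.
    return b"MariaDB" in banner

def port_serves_mariadb(host: str, port: int = 3306, timeout: float = 10.0) -> bool:
    # Connect, read the server greeting, and look for "MariaDB" in it.
    # On a fresh deploy nothing listens yet, so refusal or timeout maps
    # to False -- the condition the play above ignores.
    try:
        with socket.create_connection((host, port), timeout=timeout) as sock:
            sock.settimeout(timeout)
            return banner_has_mariadb(sock.recv(256))
    except OSError:
        return False
```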
[mariadb : Extract MariaDB service WSREP sync status] ********************* 2025-05-25 04:00:03.523429 | orchestrator | Sunday 25 May 2025 03:57:42 +0000 (0:00:00.397) 0:00:47.520 ************ 2025-05-25 04:00:03.523440 | orchestrator | skipping: [testbed-node-0] 2025-05-25 04:00:03.523451 | orchestrator | skipping: [testbed-node-1] 2025-05-25 04:00:03.523462 | orchestrator | skipping: [testbed-node-2] 2025-05-25 04:00:03.523472 | orchestrator | 2025-05-25 04:00:03.523483 | orchestrator | TASK [mariadb : Divide hosts by their MariaDB service WSREP sync status] ******* 2025-05-25 04:00:03.523494 | orchestrator | Sunday 25 May 2025 03:57:43 +0000 (0:00:00.406) 0:00:47.927 ************ 2025-05-25 04:00:03.523504 | orchestrator | ok: [testbed-node-0] 2025-05-25 04:00:03.523515 | orchestrator | ok: [testbed-node-1] 2025-05-25 04:00:03.523526 | orchestrator | ok: [testbed-node-2] 2025-05-25 04:00:03.523536 | orchestrator | 2025-05-25 04:00:03.523547 | orchestrator | TASK [mariadb : Fail when MariaDB services are not synced across the whole cluster] *** 2025-05-25 04:00:03.523558 | orchestrator | Sunday 25 May 2025 03:57:43 +0000 (0:00:00.635) 0:00:48.563 ************ 2025-05-25 04:00:03.523575 | orchestrator | skipping: [testbed-node-0] 2025-05-25 04:00:03.523586 | orchestrator | skipping: [testbed-node-1] 2025-05-25 04:00:03.523597 | orchestrator | skipping: [testbed-node-2] 2025-05-25 04:00:03.523608 | orchestrator | 2025-05-25 04:00:03.523619 | orchestrator | TASK [mariadb : include_tasks] ************************************************* 2025-05-25 04:00:03.523630 | orchestrator | Sunday 25 May 2025 03:57:44 +0000 (0:00:00.416) 0:00:48.979 ************ 2025-05-25 04:00:03.523640 | orchestrator | skipping: [testbed-node-1] 2025-05-25 04:00:03.523651 | orchestrator | skipping: [testbed-node-2] 2025-05-25 04:00:03.523662 | orchestrator | included: /ansible/roles/mariadb/tasks/bootstrap_cluster.yml for testbed-node-0 2025-05-25 04:00:03.523673 | orchestrator | 2025-05-25 
04:00:03.523683 | orchestrator | TASK [mariadb : Running MariaDB bootstrap container] *************************** 2025-05-25 04:00:03.523694 | orchestrator | Sunday 25 May 2025 03:57:44 +0000 (0:00:00.373) 0:00:49.352 ************ 2025-05-25 04:00:03.523705 | orchestrator | changed: [testbed-node-0] 2025-05-25 04:00:03.523715 | orchestrator | 2025-05-25 04:00:03.523726 | orchestrator | TASK [mariadb : Store bootstrap host name into facts] ************************** 2025-05-25 04:00:03.523737 | orchestrator | Sunday 25 May 2025 03:57:54 +0000 (0:00:09.716) 0:00:59.069 ************ 2025-05-25 04:00:03.523747 | orchestrator | ok: [testbed-node-0] 2025-05-25 04:00:03.523758 | orchestrator | 2025-05-25 04:00:03.523769 | orchestrator | TASK [mariadb : include_tasks] ************************************************* 2025-05-25 04:00:03.523788 | orchestrator | Sunday 25 May 2025 03:57:54 +0000 (0:00:00.141) 0:00:59.211 ************ 2025-05-25 04:00:03.523799 | orchestrator | skipping: [testbed-node-0] 2025-05-25 04:00:03.523810 | orchestrator | skipping: [testbed-node-1] 2025-05-25 04:00:03.523820 | orchestrator | skipping: [testbed-node-2] 2025-05-25 04:00:03.523831 | orchestrator | 2025-05-25 04:00:03.523842 | orchestrator | RUNNING HANDLER [mariadb : Starting first MariaDB container] ******************* 2025-05-25 04:00:03.523853 | orchestrator | Sunday 25 May 2025 03:57:55 +0000 (0:00:01.050) 0:01:00.262 ************ 2025-05-25 04:00:03.523927 | orchestrator | changed: [testbed-node-0] 2025-05-25 04:00:03.523942 | orchestrator | 2025-05-25 04:00:03.523953 | orchestrator | RUNNING HANDLER [mariadb : Wait for first MariaDB service port liveness] ******* 2025-05-25 04:00:03.523964 | orchestrator | Sunday 25 May 2025 03:58:03 +0000 (0:00:07.600) 0:01:07.862 ************ 2025-05-25 04:00:03.523975 | orchestrator | ok: [testbed-node-0] 2025-05-25 04:00:03.523985 | orchestrator | 2025-05-25 04:00:03.523996 | orchestrator | RUNNING HANDLER [mariadb : Wait for first MariaDB 
service to sync WSREP] ******* 2025-05-25 04:00:03.524007 | orchestrator | Sunday 25 May 2025 03:58:04 +0000 (0:00:01.514) 0:01:09.377 ************ 2025-05-25 04:00:03.524018 | orchestrator | ok: [testbed-node-0] 2025-05-25 04:00:03.524029 | orchestrator | 2025-05-25 04:00:03.524039 | orchestrator | RUNNING HANDLER [mariadb : Ensure MariaDB is running normally on bootstrap host] *** 2025-05-25 04:00:03.524050 | orchestrator | Sunday 25 May 2025 03:58:07 +0000 (0:00:02.454) 0:01:11.832 ************ 2025-05-25 04:00:03.524061 | orchestrator | changed: [testbed-node-0] 2025-05-25 04:00:03.524072 | orchestrator | 2025-05-25 04:00:03.524083 | orchestrator | RUNNING HANDLER [mariadb : Restart MariaDB on existing cluster members] ******** 2025-05-25 04:00:03.524094 | orchestrator | Sunday 25 May 2025 03:58:07 +0000 (0:00:00.106) 0:01:11.938 ************ 2025-05-25 04:00:03.524104 | orchestrator | skipping: [testbed-node-0] 2025-05-25 04:00:03.524115 | orchestrator | skipping: [testbed-node-1] 2025-05-25 04:00:03.524126 | orchestrator | skipping: [testbed-node-2] 2025-05-25 04:00:03.524137 | orchestrator | 2025-05-25 04:00:03.524147 | orchestrator | RUNNING HANDLER [mariadb : Start MariaDB on new nodes] ************************* 2025-05-25 04:00:03.524158 | orchestrator | Sunday 25 May 2025 03:58:07 +0000 (0:00:00.466) 0:01:12.404 ************ 2025-05-25 04:00:03.524169 | orchestrator | skipping: [testbed-node-0] 2025-05-25 04:00:03.524180 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: mariadb_restart 2025-05-25 04:00:03.524190 | orchestrator | changed: [testbed-node-1] 2025-05-25 04:00:03.524201 | orchestrator | changed: [testbed-node-2] 2025-05-25 04:00:03.524212 | orchestrator | 2025-05-25 04:00:03.524222 | orchestrator | PLAY [Restart mariadb services] ************************************************ 2025-05-25 04:00:03.524233 | orchestrator | skipping: no hosts matched 2025-05-25 04:00:03.524244 | orchestrator | 2025-05-25 04:00:03.524255 
| orchestrator | PLAY [Start mariadb services] ************************************************** 2025-05-25 04:00:03.524265 | orchestrator | 2025-05-25 04:00:03.524276 | orchestrator | TASK [mariadb : Restart MariaDB container] ************************************* 2025-05-25 04:00:03.524292 | orchestrator | Sunday 25 May 2025 03:58:07 +0000 (0:00:00.353) 0:01:12.758 ************ 2025-05-25 04:00:03.524303 | orchestrator | changed: [testbed-node-1] 2025-05-25 04:00:03.524314 | orchestrator | 2025-05-25 04:00:03.524325 | orchestrator | TASK [mariadb : Wait for MariaDB service port liveness] ************************ 2025-05-25 04:00:03.524336 | orchestrator | Sunday 25 May 2025 03:58:27 +0000 (0:00:19.287) 0:01:32.045 ************ 2025-05-25 04:00:03.524346 | orchestrator | ok: [testbed-node-1] 2025-05-25 04:00:03.524357 | orchestrator | 2025-05-25 04:00:03.524368 | orchestrator | TASK [mariadb : Wait for MariaDB service to sync WSREP] ************************ 2025-05-25 04:00:03.524379 | orchestrator | Sunday 25 May 2025 03:58:47 +0000 (0:00:20.551) 0:01:52.596 ************ 2025-05-25 04:00:03.524389 | orchestrator | ok: [testbed-node-1] 2025-05-25 04:00:03.524400 | orchestrator | 2025-05-25 04:00:03.524411 | orchestrator | PLAY [Start mariadb services] ************************************************** 2025-05-25 04:00:03.524421 | orchestrator | 2025-05-25 04:00:03.524432 | orchestrator | TASK [mariadb : Restart MariaDB container] ************************************* 2025-05-25 04:00:03.524450 | orchestrator | Sunday 25 May 2025 03:58:50 +0000 (0:00:02.483) 0:01:55.079 ************ 2025-05-25 04:00:03.524461 | orchestrator | changed: [testbed-node-2] 2025-05-25 04:00:03.524472 | orchestrator | 2025-05-25 04:00:03.524482 | orchestrator | TASK [mariadb : Wait for MariaDB service port liveness] ************************ 2025-05-25 04:00:03.524493 | orchestrator | Sunday 25 May 2025 03:59:09 +0000 (0:00:19.025) 0:02:14.105 ************ 2025-05-25 04:00:03.524504 | 
orchestrator | ok: [testbed-node-2] 2025-05-25 04:00:03.524514 | orchestrator | 2025-05-25 04:00:03.524525 | orchestrator | TASK [mariadb : Wait for MariaDB service to sync WSREP] ************************ 2025-05-25 04:00:03.524536 | orchestrator | Sunday 25 May 2025 03:59:29 +0000 (0:00:20.546) 0:02:34.652 ************ 2025-05-25 04:00:03.524546 | orchestrator | ok: [testbed-node-2] 2025-05-25 04:00:03.524557 | orchestrator | 2025-05-25 04:00:03.524567 | orchestrator | PLAY [Restart bootstrap mariadb service] *************************************** 2025-05-25 04:00:03.524578 | orchestrator | 2025-05-25 04:00:03.524595 | orchestrator | TASK [mariadb : Restart MariaDB container] ************************************* 2025-05-25 04:00:03.524607 | orchestrator | Sunday 25 May 2025 03:59:32 +0000 (0:00:02.595) 0:02:37.247 ************ 2025-05-25 04:00:03.524617 | orchestrator | changed: [testbed-node-0] 2025-05-25 04:00:03.524628 | orchestrator | 2025-05-25 04:00:03.524639 | orchestrator | TASK [mariadb : Wait for MariaDB service port liveness] ************************ 2025-05-25 04:00:03.524649 | orchestrator | Sunday 25 May 2025 03:59:42 +0000 (0:00:10.204) 0:02:47.451 ************ 2025-05-25 04:00:03.524660 | orchestrator | ok: [testbed-node-0] 2025-05-25 04:00:03.524671 | orchestrator | 2025-05-25 04:00:03.524682 | orchestrator | TASK [mariadb : Wait for MariaDB service to sync WSREP] ************************ 2025-05-25 04:00:03.524692 | orchestrator | Sunday 25 May 2025 03:59:48 +0000 (0:00:05.527) 0:02:52.979 ************ 2025-05-25 04:00:03.524703 | orchestrator | ok: [testbed-node-0] 2025-05-25 04:00:03.524714 | orchestrator | 2025-05-25 04:00:03.524724 | orchestrator | PLAY [Apply mariadb post-configuration] **************************************** 2025-05-25 04:00:03.524735 | orchestrator | 2025-05-25 04:00:03.524746 | orchestrator | TASK [Include mariadb post-deploy.yml] ***************************************** 2025-05-25 04:00:03.524757 | orchestrator | 
Sunday 25 May 2025 03:59:50 +0000 (0:00:02.346) 0:02:55.325 ************ 2025-05-25 04:00:03.524767 | orchestrator | included: mariadb for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-25 04:00:03.524778 | orchestrator | 2025-05-25 04:00:03.524789 | orchestrator | TASK [mariadb : Creating shard root mysql user] ******************************** 2025-05-25 04:00:03.524799 | orchestrator | Sunday 25 May 2025 03:59:51 +0000 (0:00:00.514) 0:02:55.839 ************ 2025-05-25 04:00:03.524810 | orchestrator | skipping: [testbed-node-1] 2025-05-25 04:00:03.524821 | orchestrator | skipping: [testbed-node-2] 2025-05-25 04:00:03.524832 | orchestrator | changed: [testbed-node-0] 2025-05-25 04:00:03.524842 | orchestrator | 2025-05-25 04:00:03.524853 | orchestrator | TASK [mariadb : Creating mysql monitor user] *********************************** 2025-05-25 04:00:03.524886 | orchestrator | Sunday 25 May 2025 03:59:53 +0000 (0:00:02.361) 0:02:58.200 ************ 2025-05-25 04:00:03.524899 | orchestrator | skipping: [testbed-node-1] 2025-05-25 04:00:03.524910 | orchestrator | skipping: [testbed-node-2] 2025-05-25 04:00:03.524920 | orchestrator | changed: [testbed-node-0] 2025-05-25 04:00:03.524931 | orchestrator | 2025-05-25 04:00:03.524942 | orchestrator | TASK [mariadb : Creating database backup user and setting permissions] ********* 2025-05-25 04:00:03.524952 | orchestrator | Sunday 25 May 2025 03:59:55 +0000 (0:00:02.087) 0:03:00.288 ************ 2025-05-25 04:00:03.524963 | orchestrator | skipping: [testbed-node-1] 2025-05-25 04:00:03.524974 | orchestrator | skipping: [testbed-node-2] 2025-05-25 04:00:03.524985 | orchestrator | changed: [testbed-node-0] 2025-05-25 04:00:03.524995 | orchestrator | 2025-05-25 04:00:03.525006 | orchestrator | TASK [mariadb : Granting permissions on Mariabackup database to backup user] *** 2025-05-25 04:00:03.525017 | orchestrator | Sunday 25 May 2025 03:59:57 +0000 (0:00:02.084) 0:03:02.372 ************ 2025-05-25 04:00:03.525115 | 
orchestrator | skipping: [testbed-node-1] 2025-05-25 04:00:03.525136 | orchestrator | skipping: [testbed-node-2] 2025-05-25 04:00:03.525147 | orchestrator | changed: [testbed-node-0] 2025-05-25 04:00:03.525158 | orchestrator | 2025-05-25 04:00:03.525169 | orchestrator | TASK [mariadb : Wait for MariaDB service to be ready through VIP] ************** 2025-05-25 04:00:03.525180 | orchestrator | Sunday 25 May 2025 03:59:59 +0000 (0:00:02.029) 0:03:04.401 ************ 2025-05-25 04:00:03.525190 | orchestrator | ok: [testbed-node-0] 2025-05-25 04:00:03.525201 | orchestrator | ok: [testbed-node-1] 2025-05-25 04:00:03.525212 | orchestrator | ok: [testbed-node-2] 2025-05-25 04:00:03.525223 | orchestrator | 2025-05-25 04:00:03.525234 | orchestrator | TASK [Include mariadb post-upgrade.yml] **************************************** 2025-05-25 04:00:03.525245 | orchestrator | Sunday 25 May 2025 04:00:02 +0000 (0:00:02.838) 0:03:07.239 ************ 2025-05-25 04:00:03.525255 | orchestrator | skipping: [testbed-node-0] 2025-05-25 04:00:03.525266 | orchestrator | skipping: [testbed-node-1] 2025-05-25 04:00:03.525277 | orchestrator | skipping: [testbed-node-2] 2025-05-25 04:00:03.525288 | orchestrator | 2025-05-25 04:00:03.525298 | orchestrator | PLAY RECAP ********************************************************************* 2025-05-25 04:00:03.525309 | orchestrator | localhost : ok=3  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=1  2025-05-25 04:00:03.525326 | orchestrator | testbed-node-0 : ok=34  changed=16  unreachable=0 failed=0 skipped=11  rescued=0 ignored=1  2025-05-25 04:00:03.525339 | orchestrator | testbed-node-1 : ok=20  changed=7  unreachable=0 failed=0 skipped=18  rescued=0 ignored=1  2025-05-25 04:00:03.525350 | orchestrator | testbed-node-2 : ok=20  changed=7  unreachable=0 failed=0 skipped=18  rescued=0 ignored=1  2025-05-25 04:00:03.525361 | orchestrator | 2025-05-25 04:00:03.525372 | orchestrator | 2025-05-25 04:00:03.525382 | orchestrator | 
TASKS RECAP ******************************************************************** 2025-05-25 04:00:03.525393 | orchestrator | Sunday 25 May 2025 04:00:02 +0000 (0:00:00.229) 0:03:07.468 ************ 2025-05-25 04:00:03.525404 | orchestrator | =============================================================================== 2025-05-25 04:00:03.525415 | orchestrator | mariadb : Wait for MariaDB service port liveness ----------------------- 41.10s 2025-05-25 04:00:03.525425 | orchestrator | mariadb : Restart MariaDB container ------------------------------------ 38.31s 2025-05-25 04:00:03.525436 | orchestrator | mariadb : Check MariaDB service port liveness -------------------------- 10.90s 2025-05-25 04:00:03.525447 | orchestrator | mariadb : Restart MariaDB container ------------------------------------ 10.20s 2025-05-25 04:00:03.525458 | orchestrator | mariadb : Running MariaDB bootstrap container --------------------------- 9.72s 2025-05-25 04:00:03.525469 | orchestrator | mariadb : Starting first MariaDB container ------------------------------ 7.60s 2025-05-25 04:00:03.525486 | orchestrator | mariadb : Wait for MariaDB service port liveness ------------------------ 5.53s 2025-05-25 04:00:03.525497 | orchestrator | mariadb : Wait for MariaDB service to sync WSREP ------------------------ 5.08s 2025-05-25 04:00:03.525508 | orchestrator | mariadb : Copying over galera.cnf --------------------------------------- 3.98s 2025-05-25 04:00:03.525519 | orchestrator | mariadb : Copying over config.json files for services ------------------- 3.94s 2025-05-25 04:00:03.525530 | orchestrator | mariadb : Check mariadb containers -------------------------------------- 3.69s 2025-05-25 04:00:03.525541 | orchestrator | mariadb : Ensuring config directories exist ----------------------------- 3.50s 2025-05-25 04:00:03.525551 | orchestrator | service-cert-copy : mariadb | Copying over extra CA certificates -------- 3.22s 2025-05-25 04:00:03.525562 | orchestrator | service-cert-copy : 
mariadb | Copying over backend internal TLS key ----- 3.01s 2025-05-25 04:00:03.525573 | orchestrator | Check MariaDB service --------------------------------------------------- 2.87s 2025-05-25 04:00:03.525584 | orchestrator | mariadb : Wait for MariaDB service to be ready through VIP -------------- 2.84s 2025-05-25 04:00:03.525601 | orchestrator | service-cert-copy : mariadb | Copying over backend internal TLS certificate --- 2.69s 2025-05-25 04:00:03.525612 | orchestrator | mariadb : Wait for first MariaDB service to sync WSREP ------------------ 2.45s 2025-05-25 04:00:03.525623 | orchestrator | mariadb : Creating shard root mysql user -------------------------------- 2.36s 2025-05-25 04:00:03.525633 | orchestrator | mariadb : Wait for MariaDB service to sync WSREP ------------------------ 2.35s 2025-05-25 04:00:03.525644 | orchestrator | 2025-05-25 04:00:03 | INFO  | Task 11e16315-c41a-431d-9d8d-20890955f9ed is in state STARTED 2025-05-25 04:00:03.525655 | orchestrator | 2025-05-25 04:00:03 | INFO  | Wait 1 second(s) until the next check 2025-05-25 04:00:06.585860 | orchestrator | 2025-05-25 04:00:06 | INFO  | Task 70e0c9ed-37fa-4303-84bd-948b4f696f4e is in state STARTED 2025-05-25 04:00:06.589439 | orchestrator | 2025-05-25 04:00:06 | INFO  | Task 5e38feec-0d6a-4d0c-9e6e-16f705495470 is in state STARTED 2025-05-25 04:00:06.595029 | orchestrator | 2025-05-25 04:00:06 | INFO  | Task 11e16315-c41a-431d-9d8d-20890955f9ed is in state STARTED 2025-05-25 04:00:06.595097 | orchestrator | 2025-05-25 04:00:06 | INFO  | Wait 1 second(s) until the next check
2025-05-25 04:01:07.624166 | orchestrator | 2025-05-25 04:01:07 | INFO  | Task 70e0c9ed-37fa-4303-84bd-948b4f696f4e is in state STARTED 2025-05-25 04:01:07.625100 | orchestrator | 2025-05-25 04:01:07 | INFO  | Task 5e38feec-0d6a-4d0c-9e6e-16f705495470 is in state STARTED 2025-05-25 04:01:07.628584 | orchestrator | 2025-05-25 04:01:07 | INFO  | Task 11e16315-c41a-431d-9d8d-20890955f9ed is in state SUCCESS 2025-05-25 04:01:07.631945 | orchestrator | 2025-05-25 04:01:07.632282 | orchestrator | 2025-05-25
04:01:07.632302 | orchestrator | PLAY [Create ceph pools] ******************************************************* 2025-05-25 04:01:07.632315 | orchestrator | 2025-05-25 04:01:07.632326 | orchestrator | TASK [ceph-facts : Include facts.yml] ****************************************** 2025-05-25 04:01:07.632339 | orchestrator | Sunday 25 May 2025 03:59:02 +0000 (0:00:00.706) 0:00:00.706 ************ 2025-05-25 04:01:07.632541 | orchestrator | included: /ansible/roles/ceph-facts/tasks/facts.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-05-25 04:01:07.632880 | orchestrator | 2025-05-25 04:01:07.632903 | orchestrator | TASK [ceph-facts : Check if it is atomic host] ********************************* 2025-05-25 04:01:07.632922 | orchestrator | Sunday 25 May 2025 03:59:03 +0000 (0:00:00.599) 0:00:01.306 ************ 2025-05-25 04:01:07.632939 | orchestrator | ok: [testbed-node-4] 2025-05-25 04:01:07.632959 | orchestrator | ok: [testbed-node-3] 2025-05-25 04:01:07.632976 | orchestrator | ok: [testbed-node-5] 2025-05-25 04:01:07.632995 | orchestrator | 2025-05-25 04:01:07.633015 | orchestrator | TASK [ceph-facts : Set_fact is_atomic] ***************************************** 2025-05-25 04:01:07.633033 | orchestrator | Sunday 25 May 2025 03:59:03 +0000 (0:00:00.608) 0:00:01.914 ************ 2025-05-25 04:01:07.633051 | orchestrator | ok: [testbed-node-3] 2025-05-25 04:01:07.633063 | orchestrator | ok: [testbed-node-4] 2025-05-25 04:01:07.633074 | orchestrator | ok: [testbed-node-5] 2025-05-25 04:01:07.633085 | orchestrator | 2025-05-25 04:01:07.633096 | orchestrator | TASK [ceph-facts : Check if podman binary is present] ************************** 2025-05-25 04:01:07.633107 | orchestrator | Sunday 25 May 2025 03:59:04 +0000 (0:00:00.258) 0:00:02.173 ************ 2025-05-25 04:01:07.633118 | orchestrator | ok: [testbed-node-3] 2025-05-25 04:01:07.633128 | orchestrator | ok: [testbed-node-4] 2025-05-25 04:01:07.633139 | orchestrator | ok: [testbed-node-5] 
2025-05-25 04:01:07.633150 | orchestrator | 2025-05-25 04:01:07.633161 | orchestrator | TASK [ceph-facts : Set_fact container_binary] ********************************** 2025-05-25 04:01:07.633171 | orchestrator | Sunday 25 May 2025 03:59:04 +0000 (0:00:00.738) 0:00:02.911 ************ 2025-05-25 04:01:07.633182 | orchestrator | ok: [testbed-node-3] 2025-05-25 04:01:07.633193 | orchestrator | ok: [testbed-node-4] 2025-05-25 04:01:07.633203 | orchestrator | ok: [testbed-node-5] 2025-05-25 04:01:07.633214 | orchestrator | 2025-05-25 04:01:07.633225 | orchestrator | TASK [ceph-facts : Set_fact ceph_cmd] ****************************************** 2025-05-25 04:01:07.633236 | orchestrator | Sunday 25 May 2025 03:59:05 +0000 (0:00:00.313) 0:00:03.225 ************ 2025-05-25 04:01:07.633246 | orchestrator | ok: [testbed-node-3] 2025-05-25 04:01:07.633258 | orchestrator | ok: [testbed-node-4] 2025-05-25 04:01:07.633269 | orchestrator | ok: [testbed-node-5] 2025-05-25 04:01:07.633279 | orchestrator | 2025-05-25 04:01:07.633290 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python] ********************* 2025-05-25 04:01:07.633301 | orchestrator | Sunday 25 May 2025 03:59:05 +0000 (0:00:00.304) 0:00:03.530 ************ 2025-05-25 04:01:07.633312 | orchestrator | ok: [testbed-node-3] 2025-05-25 04:01:07.633323 | orchestrator | ok: [testbed-node-4] 2025-05-25 04:01:07.633333 | orchestrator | ok: [testbed-node-5] 2025-05-25 04:01:07.633344 | orchestrator | 2025-05-25 04:01:07.633355 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python if not previously set] *** 2025-05-25 04:01:07.633366 | orchestrator | Sunday 25 May 2025 03:59:05 +0000 (0:00:00.302) 0:00:03.832 ************ 2025-05-25 04:01:07.633377 | orchestrator | skipping: [testbed-node-3] 2025-05-25 04:01:07.633389 | orchestrator | skipping: [testbed-node-4] 2025-05-25 04:01:07.633399 | orchestrator | skipping: [testbed-node-5] 2025-05-25 04:01:07.633410 | orchestrator | 
2025-05-25 04:01:07.633421 | orchestrator | TASK [ceph-facts : Set_fact ceph_release ceph_stable_release] ****************** 2025-05-25 04:01:07.633453 | orchestrator | Sunday 25 May 2025 03:59:06 +0000 (0:00:00.451) 0:00:04.284 ************ 2025-05-25 04:01:07.633466 | orchestrator | ok: [testbed-node-3] 2025-05-25 04:01:07.633478 | orchestrator | ok: [testbed-node-4] 2025-05-25 04:01:07.633491 | orchestrator | ok: [testbed-node-5] 2025-05-25 04:01:07.633503 | orchestrator | 2025-05-25 04:01:07.633516 | orchestrator | TASK [ceph-facts : Set_fact monitor_name ansible_facts['hostname']] ************ 2025-05-25 04:01:07.633528 | orchestrator | Sunday 25 May 2025 03:59:06 +0000 (0:00:00.288) 0:00:04.573 ************ 2025-05-25 04:01:07.633542 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2025-05-25 04:01:07.633554 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2025-05-25 04:01:07.633567 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2025-05-25 04:01:07.633580 | orchestrator | 2025-05-25 04:01:07.633592 | orchestrator | TASK [ceph-facts : Set_fact container_exec_cmd] ******************************** 2025-05-25 04:01:07.633605 | orchestrator | Sunday 25 May 2025 03:59:07 +0000 (0:00:00.636) 0:00:05.209 ************ 2025-05-25 04:01:07.633617 | orchestrator | ok: [testbed-node-3] 2025-05-25 04:01:07.633629 | orchestrator | ok: [testbed-node-4] 2025-05-25 04:01:07.633642 | orchestrator | ok: [testbed-node-5] 2025-05-25 04:01:07.633654 | orchestrator | 2025-05-25 04:01:07.633668 | orchestrator | TASK [ceph-facts : Find a running mon container] ******************************* 2025-05-25 04:01:07.633680 | orchestrator | Sunday 25 May 2025 03:59:07 +0000 (0:00:00.409) 0:00:05.619 ************ 2025-05-25 04:01:07.633693 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2025-05-25 
04:01:07.633706 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2025-05-25 04:01:07.633718 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2025-05-25 04:01:07.633731 | orchestrator | 2025-05-25 04:01:07.633743 | orchestrator | TASK [ceph-facts : Check for a ceph mon socket] ******************************** 2025-05-25 04:01:07.633755 | orchestrator | Sunday 25 May 2025 03:59:09 +0000 (0:00:02.058) 0:00:07.677 ************ 2025-05-25 04:01:07.633768 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)  2025-05-25 04:01:07.633781 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)  2025-05-25 04:01:07.633794 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)  2025-05-25 04:01:07.633805 | orchestrator | skipping: [testbed-node-3] 2025-05-25 04:01:07.633816 | orchestrator | 2025-05-25 04:01:07.633880 | orchestrator | TASK [ceph-facts : Check if the ceph mon socket is in-use] ********************* 2025-05-25 04:01:07.633957 | orchestrator | Sunday 25 May 2025 03:59:09 +0000 (0:00:00.380) 0:00:08.057 ************ 2025-05-25 04:01:07.633981 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})  2025-05-25 04:01:07.633996 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})  2025-05-25 04:01:07.634008 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})  
2025-05-25 04:01:07.634069 | orchestrator | skipping: [testbed-node-3] 2025-05-25 04:01:07.634088 | orchestrator | 2025-05-25 04:01:07.634106 | orchestrator | TASK [ceph-facts : Set_fact running_mon - non_container] *********************** 2025-05-25 04:01:07.634151 | orchestrator | Sunday 25 May 2025 03:59:10 +0000 (0:00:00.750) 0:00:08.807 ************ 2025-05-25 04:01:07.634173 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2025-05-25 04:01:07.634210 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2025-05-25 04:01:07.634229 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2025-05-25 04:01:07.634247 | orchestrator | skipping: [testbed-node-3] 2025-05-25 04:01:07.634264 | orchestrator | 2025-05-25 04:01:07.634282 | orchestrator | TASK [ceph-facts : Set_fact running_mon - container] *************************** 2025-05-25 04:01:07.634301 | 
orchestrator | Sunday 25 May 2025 03:59:10 +0000 (0:00:00.140) 0:00:08.948 ************ 2025-05-25 04:01:07.634321 | orchestrator | ok: [testbed-node-3] => (item={'changed': False, 'stdout': 'f9e65bfc28a6', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-0'], 'start': '2025-05-25 03:59:08.087632', 'end': '2025-05-25 03:59:08.172278', 'delta': '0:00:00.084646', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-0', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['f9e65bfc28a6'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}) 2025-05-25 04:01:07.634345 | orchestrator | ok: [testbed-node-3] => (item={'changed': False, 'stdout': 'adc79498cd02', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-1'], 'start': '2025-05-25 03:59:08.811519', 'end': '2025-05-25 03:59:08.854934', 'delta': '0:00:00.043415', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-1', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['adc79498cd02'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}) 2025-05-25 04:01:07.634428 | orchestrator | ok: [testbed-node-3] => (item={'changed': False, 'stdout': '7642e6248279', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-2'], 'start': '2025-05-25 03:59:09.352187', 'end': '2025-05-25 03:59:09.395492', 'delta': '0:00:00.043305', 'msg': 
'', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-2', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['7642e6248279'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}) 2025-05-25 04:01:07.634444 | orchestrator | 2025-05-25 04:01:07.634456 | orchestrator | TASK [ceph-facts : Set_fact _container_exec_cmd] ******************************* 2025-05-25 04:01:07.634468 | orchestrator | Sunday 25 May 2025 03:59:11 +0000 (0:00:00.358) 0:00:09.306 ************ 2025-05-25 04:01:07.634489 | orchestrator | ok: [testbed-node-3] 2025-05-25 04:01:07.634500 | orchestrator | ok: [testbed-node-4] 2025-05-25 04:01:07.634512 | orchestrator | ok: [testbed-node-5] 2025-05-25 04:01:07.634523 | orchestrator | 2025-05-25 04:01:07.634535 | orchestrator | TASK [ceph-facts : Get current fsid if cluster is already running] ************* 2025-05-25 04:01:07.634546 | orchestrator | Sunday 25 May 2025 03:59:11 +0000 (0:00:00.437) 0:00:09.744 ************ 2025-05-25 04:01:07.634557 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] 2025-05-25 04:01:07.634569 | orchestrator | 2025-05-25 04:01:07.634580 | orchestrator | TASK [ceph-facts : Set_fact current_fsid rc 1] ********************************* 2025-05-25 04:01:07.634591 | orchestrator | Sunday 25 May 2025 03:59:13 +0000 (0:00:01.574) 0:00:11.319 ************ 2025-05-25 04:01:07.634602 | orchestrator | skipping: [testbed-node-3] 2025-05-25 04:01:07.634614 | orchestrator | skipping: [testbed-node-4] 2025-05-25 04:01:07.634626 | orchestrator | skipping: [testbed-node-5] 2025-05-25 04:01:07.634637 | orchestrator | 2025-05-25 04:01:07.634648 | orchestrator | TASK [ceph-facts : Get current fsid] ******************************************* 
2025-05-25 04:01:07.634660 | orchestrator | Sunday 25 May 2025 03:59:13 +0000 (0:00:00.336) 0:00:11.655 ************ 2025-05-25 04:01:07.634671 | orchestrator | skipping: [testbed-node-3] 2025-05-25 04:01:07.634682 | orchestrator | skipping: [testbed-node-4] 2025-05-25 04:01:07.634694 | orchestrator | skipping: [testbed-node-5] 2025-05-25 04:01:07.634705 | orchestrator | 2025-05-25 04:01:07.634716 | orchestrator | TASK [ceph-facts : Set_fact fsid] ********************************************** 2025-05-25 04:01:07.634728 | orchestrator | Sunday 25 May 2025 03:59:13 +0000 (0:00:00.394) 0:00:12.050 ************ 2025-05-25 04:01:07.634739 | orchestrator | skipping: [testbed-node-3] 2025-05-25 04:01:07.634750 | orchestrator | skipping: [testbed-node-4] 2025-05-25 04:01:07.634762 | orchestrator | skipping: [testbed-node-5] 2025-05-25 04:01:07.634773 | orchestrator | 2025-05-25 04:01:07.634784 | orchestrator | TASK [ceph-facts : Set_fact fsid from current_fsid] **************************** 2025-05-25 04:01:07.634796 | orchestrator | Sunday 25 May 2025 03:59:14 +0000 (0:00:00.441) 0:00:12.492 ************ 2025-05-25 04:01:07.634807 | orchestrator | ok: [testbed-node-3] 2025-05-25 04:01:07.634818 | orchestrator | 2025-05-25 04:01:07.634891 | orchestrator | TASK [ceph-facts : Generate cluster fsid] ************************************** 2025-05-25 04:01:07.634902 | orchestrator | Sunday 25 May 2025 03:59:14 +0000 (0:00:00.126) 0:00:12.618 ************ 2025-05-25 04:01:07.634913 | orchestrator | skipping: [testbed-node-3] 2025-05-25 04:01:07.634924 | orchestrator | 2025-05-25 04:01:07.634935 | orchestrator | TASK [ceph-facts : Set_fact fsid] ********************************************** 2025-05-25 04:01:07.634946 | orchestrator | Sunday 25 May 2025 03:59:14 +0000 (0:00:00.220) 0:00:12.839 ************ 2025-05-25 04:01:07.634956 | orchestrator | skipping: [testbed-node-3] 2025-05-25 04:01:07.634967 | orchestrator | skipping: [testbed-node-4] 2025-05-25 04:01:07.634978 | 
orchestrator | skipping: [testbed-node-5] 2025-05-25 04:01:07.634989 | orchestrator | 2025-05-25 04:01:07.634999 | orchestrator | TASK [ceph-facts : Resolve device link(s)] ************************************* 2025-05-25 04:01:07.635010 | orchestrator | Sunday 25 May 2025 03:59:14 +0000 (0:00:00.270) 0:00:13.109 ************ 2025-05-25 04:01:07.635021 | orchestrator | skipping: [testbed-node-3] 2025-05-25 04:01:07.635032 | orchestrator | skipping: [testbed-node-4] 2025-05-25 04:01:07.635043 | orchestrator | skipping: [testbed-node-5] 2025-05-25 04:01:07.635053 | orchestrator | 2025-05-25 04:01:07.635064 | orchestrator | TASK [ceph-facts : Set_fact build devices from resolved symlinks] ************** 2025-05-25 04:01:07.635075 | orchestrator | Sunday 25 May 2025 03:59:15 +0000 (0:00:00.319) 0:00:13.429 ************ 2025-05-25 04:01:07.635086 | orchestrator | skipping: [testbed-node-3] 2025-05-25 04:01:07.635097 | orchestrator | skipping: [testbed-node-4] 2025-05-25 04:01:07.635107 | orchestrator | skipping: [testbed-node-5] 2025-05-25 04:01:07.635118 | orchestrator | 2025-05-25 04:01:07.635129 | orchestrator | TASK [ceph-facts : Resolve dedicated_device link(s)] *************************** 2025-05-25 04:01:07.635140 | orchestrator | Sunday 25 May 2025 03:59:15 +0000 (0:00:00.488) 0:00:13.918 ************ 2025-05-25 04:01:07.635159 | orchestrator | skipping: [testbed-node-3] 2025-05-25 04:01:07.635169 | orchestrator | skipping: [testbed-node-4] 2025-05-25 04:01:07.635180 | orchestrator | skipping: [testbed-node-5] 2025-05-25 04:01:07.635191 | orchestrator | 2025-05-25 04:01:07.635202 | orchestrator | TASK [ceph-facts : Set_fact build dedicated_devices from resolved symlinks] **** 2025-05-25 04:01:07.635213 | orchestrator | Sunday 25 May 2025 03:59:16 +0000 (0:00:00.312) 0:00:14.230 ************ 2025-05-25 04:01:07.635224 | orchestrator | skipping: [testbed-node-3] 2025-05-25 04:01:07.635234 | orchestrator | skipping: [testbed-node-4] 2025-05-25 04:01:07.635245 | 
orchestrator | skipping: [testbed-node-5] 2025-05-25 04:01:07.635256 | orchestrator | 2025-05-25 04:01:07.635267 | orchestrator | TASK [ceph-facts : Resolve bluestore_wal_device link(s)] *********************** 2025-05-25 04:01:07.635278 | orchestrator | Sunday 25 May 2025 03:59:16 +0000 (0:00:00.315) 0:00:14.545 ************ 2025-05-25 04:01:07.635288 | orchestrator | skipping: [testbed-node-3] 2025-05-25 04:01:07.635299 | orchestrator | skipping: [testbed-node-4] 2025-05-25 04:01:07.635310 | orchestrator | skipping: [testbed-node-5] 2025-05-25 04:01:07.635321 | orchestrator | 2025-05-25 04:01:07.635332 | orchestrator | TASK [ceph-facts : Set_fact build bluestore_wal_devices from resolved symlinks] *** 2025-05-25 04:01:07.635378 | orchestrator | Sunday 25 May 2025 03:59:16 +0000 (0:00:00.297) 0:00:14.843 ************ 2025-05-25 04:01:07.635391 | orchestrator | skipping: [testbed-node-3] 2025-05-25 04:01:07.635402 | orchestrator | skipping: [testbed-node-4] 2025-05-25 04:01:07.635413 | orchestrator | skipping: [testbed-node-5] 2025-05-25 04:01:07.635424 | orchestrator | 2025-05-25 04:01:07.635435 | orchestrator | TASK [ceph-facts : Collect existed devices] ************************************ 2025-05-25 04:01:07.635452 | orchestrator | Sunday 25 May 2025 03:59:17 +0000 (0:00:00.476) 0:00:15.319 ************ 2025-05-25 04:01:07.635464 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--02f362e7--7983--50b5--b688--a41104a01860-osd--block--02f362e7--7983--50b5--b688--a41104a01860', 'dm-uuid-LVM-EIto835nqPIkh0oeoEL0S8DBvWlfCbl8H8re0YIsYzAQqybZRNhTB6UMYipVoexk'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2025-05-25 
04:01:07.635477 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--b24cffad--8a1f--50fd--b816--ada28c3c4ac7-osd--block--b24cffad--8a1f--50fd--b816--ada28c3c4ac7', 'dm-uuid-LVM-tFlPNDaJrKb6B5eh5v1xX1ivLAW9n1dXQLABeBBDprsmjK9bFTwfkVwlCJsQ0XuP'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2025-05-25 04:01:07.635489 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-25 04:01:07.635501 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-25 04:01:07.635512 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 
1}})  2025-05-25 04:01:07.635531 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-25 04:01:07.635543 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-25 04:01:07.635583 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-25 04:01:07.635603 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-25 04:01:07.635615 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 
'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-25 04:01:07.635630 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d33fe3e5-8e27-4cea-af5e-c9a31aaf43f7', 'scsi-SQEMU_QEMU_HARDDISK_d33fe3e5-8e27-4cea-af5e-c9a31aaf43f7'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d33fe3e5-8e27-4cea-af5e-c9a31aaf43f7-part1', 'scsi-SQEMU_QEMU_HARDDISK_d33fe3e5-8e27-4cea-af5e-c9a31aaf43f7-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['372462ea-137d-4e94-9465-a2fbb2a7f4ee']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': '372462ea-137d-4e94-9465-a2fbb2a7f4ee'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d33fe3e5-8e27-4cea-af5e-c9a31aaf43f7-part14', 'scsi-SQEMU_QEMU_HARDDISK_d33fe3e5-8e27-4cea-af5e-c9a31aaf43f7-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d33fe3e5-8e27-4cea-af5e-c9a31aaf43f7-part15', 'scsi-SQEMU_QEMU_HARDDISK_d33fe3e5-8e27-4cea-af5e-c9a31aaf43f7-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['A4F8-12D8']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': 'A4F8-12D8'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d33fe3e5-8e27-4cea-af5e-c9a31aaf43f7-part16', 'scsi-SQEMU_QEMU_HARDDISK_d33fe3e5-8e27-4cea-af5e-c9a31aaf43f7-part16'], 'labels': ['BOOT'], 'masters': 
[], 'uuids': ['0de9fa52-b0fa-4de2-9fd3-df23fb104826']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '0de9fa52-b0fa-4de2-9fd3-df23fb104826'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-05-25 04:01:07.635654 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'holders': ['ceph--02f362e7--7983--50b5--b688--a41104a01860-osd--block--02f362e7--7983--50b5--b688--a41104a01860'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-qtdq0z-8J15-gP85-P9SJ-dK07-zWb5-0DnwzK', 'scsi-0QEMU_QEMU_HARDDISK_cdfa8505-de86-48ff-8ed6-b6e1381a94b2', 'scsi-SQEMU_QEMU_HARDDISK_cdfa8505-de86-48ff-8ed6-b6e1381a94b2'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-05-25 04:01:07.635697 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--02ca1cf7--fa58--5bc0--a798--b7d21582c1b0-osd--block--02ca1cf7--fa58--5bc0--a798--b7d21582c1b0', 'dm-uuid-LVM-HAu4Vl80XjNgQGqZh3sFVXfBzfGDPBzt7M61G9WS93n3QFc52Avm05aFGbBLGJsF'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2025-05-25 04:01:07.635717 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': {'holders': 
['ceph--b24cffad--8a1f--50fd--b816--ada28c3c4ac7-osd--block--b24cffad--8a1f--50fd--b816--ada28c3c4ac7'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-zFQRk7-wHUy-2Er2-kQSV-Uuzs-Y07c-0XeRqW', 'scsi-0QEMU_QEMU_HARDDISK_4276f8fa-1a41-4d3c-8190-a1d2d3b80049', 'scsi-SQEMU_QEMU_HARDDISK_4276f8fa-1a41-4d3c-8190-a1d2d3b80049'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-05-25 04:01:07.635729 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--733a1394--dd45--5d63--8d82--63858202edf3-osd--block--733a1394--dd45--5d63--8d82--63858202edf3', 'dm-uuid-LVM-N0DNQ7QOeq8qzVSMsTYiekiqreuPz8LqqVLZhgTOTRYilPVNBZZGNH3uHj9wCjop'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2025-05-25 04:01:07.635741 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_dac67b12-4a3b-49b0-a18f-dd9740769fda', 'scsi-SQEMU_QEMU_HARDDISK_dac67b12-4a3b-49b0-a18f-dd9740769fda'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-05-25 04:01:07.635760 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-25 04:01:07.635772 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-05-25-03-02-49-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2025-05-25 04:01:07.635784 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}) 
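The `Set_fact running_mon - container` output earlier in this task list shows one `docker ps -q --filter name=ceph-mon-<host>` probe per monitor, and `Set_fact _container_exec_cmd` then builds the prefix used to run `ceph` commands inside a mon container. A hedged sketch of deriving those two facts from the probe results (helper names are illustrative, not ceph-facts internals):

```python
def pick_running_mon(probe_stdout: dict) -> str:
    # probe_stdout maps mon hostname -> stdout of
    # `docker ps -q --filter name=ceph-mon-<host>` (container id, or empty
    # if no such container is running). Return the first host with a mon up.
    for host, container_id in probe_stdout.items():
        if container_id.strip():
            return host
    return ""

def container_exec_cmd(mon_host: str, engine: str = "docker") -> str:
    # Prefix that later tasks prepend to run ceph CLI commands in-container.
    return f"{engine} exec ceph-mon-{mon_host}"

# Container ids taken from the probe output logged above.
probes = {"testbed-node-0": "f9e65bfc28a6",
          "testbed-node-1": "adc79498cd02",
          "testbed-node-2": "7642e6248279"}
running = pick_running_mon(probes)
cmd = container_exec_cmd(running)
```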
 2025-05-25 04:01:07.635847 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-25 04:01:07.635887 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-25 04:01:07.635907 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-25 04:01:07.635921 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-25 04:01:07.635932 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 
'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-25 04:01:07.635944 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-25 04:01:07.635971 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_43c767cb-0159-4619-b6d4-e498fa6e5c83', 'scsi-SQEMU_QEMU_HARDDISK_43c767cb-0159-4619-b6d4-e498fa6e5c83'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_43c767cb-0159-4619-b6d4-e498fa6e5c83-part1', 'scsi-SQEMU_QEMU_HARDDISK_43c767cb-0159-4619-b6d4-e498fa6e5c83-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['372462ea-137d-4e94-9465-a2fbb2a7f4ee']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': '372462ea-137d-4e94-9465-a2fbb2a7f4ee'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_43c767cb-0159-4619-b6d4-e498fa6e5c83-part14', 'scsi-SQEMU_QEMU_HARDDISK_43c767cb-0159-4619-b6d4-e498fa6e5c83-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': 
['scsi-0QEMU_QEMU_HARDDISK_43c767cb-0159-4619-b6d4-e498fa6e5c83-part15', 'scsi-SQEMU_QEMU_HARDDISK_43c767cb-0159-4619-b6d4-e498fa6e5c83-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['A4F8-12D8']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': 'A4F8-12D8'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_43c767cb-0159-4619-b6d4-e498fa6e5c83-part16', 'scsi-SQEMU_QEMU_HARDDISK_43c767cb-0159-4619-b6d4-e498fa6e5c83-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['0de9fa52-b0fa-4de2-9fd3-df23fb104826']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '0de9fa52-b0fa-4de2-9fd3-df23fb104826'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-05-25 04:01:07.635985 | orchestrator | skipping: [testbed-node-3] 2025-05-25 04:01:07.636002 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'holders': ['ceph--02ca1cf7--fa58--5bc0--a798--b7d21582c1b0-osd--block--02ca1cf7--fa58--5bc0--a798--b7d21582c1b0'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-75K85T-qtHF-V2PQ-3keF-McNZ-rYMq-djrq4j', 'scsi-0QEMU_QEMU_HARDDISK_17d1c6f1-1305-4025-b6c8-ee1be555c001', 'scsi-SQEMU_QEMU_HARDDISK_17d1c6f1-1305-4025-b6c8-ee1be555c001'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-05-25 04:01:07.636016 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'holders': ['ceph--733a1394--dd45--5d63--8d82--63858202edf3-osd--block--733a1394--dd45--5d63--8d82--63858202edf3'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-XBC6Ra-aDc7-yze2-aQJ9-K1bq-dbNG-W4h3yL', 'scsi-0QEMU_QEMU_HARDDISK_b0e50223-c4d0-48f7-a5f8-d1963b067c82', 'scsi-SQEMU_QEMU_HARDDISK_b0e50223-c4d0-48f7-a5f8-d1963b067c82'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-05-25 04:01:07.636035 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_38e86a76-d592-4447-9c79-2151d2192c3f', 'scsi-SQEMU_QEMU_HARDDISK_38e86a76-d592-4447-9c79-2151d2192c3f'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-05-25 04:01:07.636064 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-05-25-03-02-47-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2025-05-25 04:01:07.636084 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--33e996ff--67e1--5789--9eb3--97043475c088-osd--block--33e996ff--67e1--5789--9eb3--97043475c088', 'dm-uuid-LVM-f2mxDkg5RboGiSFRnoZoE0Jf5zdoZooLX3dEGjd0x3LIyAGja8yP08lNRRkeYga4'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2025-05-25 04:01:07.636103 | orchestrator | skipping: [testbed-node-4] 2025-05-25 04:01:07.636131 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': 
['dm-name-ceph--3ece5568--3437--595e--b3ba--b2f91a77c86c-osd--block--3ece5568--3437--595e--b3ba--b2f91a77c86c', 'dm-uuid-LVM-M0xTfxjiXljnWhv0xWS2ZQJ2ZEKwlMtX3setecTbz5KjpidltETUQJYINQ7cMcdk'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2025-05-25 04:01:07.636149 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-25 04:01:07.636161 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-25 04:01:07.636172 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-25 04:01:07.636183 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 
'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-25 04:01:07.636202 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-25 04:01:07.636213 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-25 04:01:07.636225 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-25 04:01:07.636236 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': 
None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-25 04:01:07.636261 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_2fcb52f5-9fc6-4d4d-aaa1-77a08e6dc4e0', 'scsi-SQEMU_QEMU_HARDDISK_2fcb52f5-9fc6-4d4d-aaa1-77a08e6dc4e0'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_2fcb52f5-9fc6-4d4d-aaa1-77a08e6dc4e0-part1', 'scsi-SQEMU_QEMU_HARDDISK_2fcb52f5-9fc6-4d4d-aaa1-77a08e6dc4e0-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['372462ea-137d-4e94-9465-a2fbb2a7f4ee']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': '372462ea-137d-4e94-9465-a2fbb2a7f4ee'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_2fcb52f5-9fc6-4d4d-aaa1-77a08e6dc4e0-part14', 'scsi-SQEMU_QEMU_HARDDISK_2fcb52f5-9fc6-4d4d-aaa1-77a08e6dc4e0-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_2fcb52f5-9fc6-4d4d-aaa1-77a08e6dc4e0-part15', 'scsi-SQEMU_QEMU_HARDDISK_2fcb52f5-9fc6-4d4d-aaa1-77a08e6dc4e0-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['A4F8-12D8']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': 'A4F8-12D8'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_2fcb52f5-9fc6-4d4d-aaa1-77a08e6dc4e0-part16', 'scsi-SQEMU_QEMU_HARDDISK_2fcb52f5-9fc6-4d4d-aaa1-77a08e6dc4e0-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['0de9fa52-b0fa-4de2-9fd3-df23fb104826']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 
'uuid': '0de9fa52-b0fa-4de2-9fd3-df23fb104826'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-05-25 04:01:07.636282 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'holders': ['ceph--33e996ff--67e1--5789--9eb3--97043475c088-osd--block--33e996ff--67e1--5789--9eb3--97043475c088'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-yI0x8n-xcOR-DPeb-Offp-taab-jv40-D8CklK', 'scsi-0QEMU_QEMU_HARDDISK_201f277c-fdb2-416e-b305-0d8ba90b32cd', 'scsi-SQEMU_QEMU_HARDDISK_201f277c-fdb2-416e-b305-0d8ba90b32cd'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-05-25 04:01:07.636293 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'holders': ['ceph--3ece5568--3437--595e--b3ba--b2f91a77c86c-osd--block--3ece5568--3437--595e--b3ba--b2f91a77c86c'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-TOpkSP-RmlJ-8nES-992L-XmPw-19k6-xzW621', 'scsi-0QEMU_QEMU_HARDDISK_8968a7f7-851b-405b-80f4-de48ab1dffee', 'scsi-SQEMU_QEMU_HARDDISK_8968a7f7-851b-405b-80f4-de48ab1dffee'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-05-25 04:01:07.636305 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_603d0154-8a06-450e-a743-756d85b1bc6a', 'scsi-SQEMU_QEMU_HARDDISK_603d0154-8a06-450e-a743-756d85b1bc6a'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-05-25 04:01:07.636322 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-05-25-03-02-44-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2025-05-25 04:01:07.636334 | orchestrator | skipping: [testbed-node-5] 2025-05-25 04:01:07.636345 | orchestrator | 2025-05-25 04:01:07.636356 | orchestrator | TASK [ceph-facts : Set_fact devices 
generate device list when osd_auto_discovery] *** 2025-05-25 04:01:07.636372 | orchestrator | Sunday 25 May 2025 03:59:17 +0000 (0:00:00.536) 0:00:15.855 ************ 2025-05-25 04:01:07.636384 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--02f362e7--7983--50b5--b688--a41104a01860-osd--block--02f362e7--7983--50b5--b688--a41104a01860', 'dm-uuid-LVM-EIto835nqPIkh0oeoEL0S8DBvWlfCbl8H8re0YIsYzAQqybZRNhTB6UMYipVoexk'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-05-25 04:01:07.636404 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--b24cffad--8a1f--50fd--b816--ada28c3c4ac7-osd--block--b24cffad--8a1f--50fd--b816--ada28c3c4ac7', 'dm-uuid-LVM-tFlPNDaJrKb6B5eh5v1xX1ivLAW9n1dXQLABeBBDprsmjK9bFTwfkVwlCJsQ0XuP'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-05-25 04:01:07.636415 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 
'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-05-25 04:01:07.636427 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-05-25 04:01:07.636439 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-05-25 04:01:07.636463 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': 
{'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-05-25 04:01:07.636475 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-05-25 04:01:07.636486 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-05-25 04:01:07.636504 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 
'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-05-25 04:01:07.636516 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--02ca1cf7--fa58--5bc0--a798--b7d21582c1b0-osd--block--02ca1cf7--fa58--5bc0--a798--b7d21582c1b0', 'dm-uuid-LVM-HAu4Vl80XjNgQGqZh3sFVXfBzfGDPBzt7M61G9WS93n3QFc52Avm05aFGbBLGJsF'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-05-25 04:01:07.636527 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-05-25 04:01:07.636545 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 
'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--733a1394--dd45--5d63--8d82--63858202edf3-osd--block--733a1394--dd45--5d63--8d82--63858202edf3', 'dm-uuid-LVM-N0DNQ7QOeq8qzVSMsTYiekiqreuPz8LqqVLZhgTOTRYilPVNBZZGNH3uHj9wCjop'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-05-25 04:01:07.636563 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d33fe3e5-8e27-4cea-af5e-c9a31aaf43f7', 'scsi-SQEMU_QEMU_HARDDISK_d33fe3e5-8e27-4cea-af5e-c9a31aaf43f7'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d33fe3e5-8e27-4cea-af5e-c9a31aaf43f7-part1', 'scsi-SQEMU_QEMU_HARDDISK_d33fe3e5-8e27-4cea-af5e-c9a31aaf43f7-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['372462ea-137d-4e94-9465-a2fbb2a7f4ee']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': '372462ea-137d-4e94-9465-a2fbb2a7f4ee'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d33fe3e5-8e27-4cea-af5e-c9a31aaf43f7-part14', 'scsi-SQEMU_QEMU_HARDDISK_d33fe3e5-8e27-4cea-af5e-c9a31aaf43f7-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': 
['scsi-0QEMU_QEMU_HARDDISK_d33fe3e5-8e27-4cea-af5e-c9a31aaf43f7-part15', 'scsi-SQEMU_QEMU_HARDDISK_d33fe3e5-8e27-4cea-af5e-c9a31aaf43f7-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['A4F8-12D8']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': 'A4F8-12D8'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d33fe3e5-8e27-4cea-af5e-c9a31aaf43f7-part16', 'scsi-SQEMU_QEMU_HARDDISK_d33fe3e5-8e27-4cea-af5e-c9a31aaf43f7-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['0de9fa52-b0fa-4de2-9fd3-df23fb104826']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '0de9fa52-b0fa-4de2-9fd3-df23fb104826'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-05-25 04:01:07.636582 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-05-25 04:01:07.636595 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'holders': ['ceph--02f362e7--7983--50b5--b688--a41104a01860-osd--block--02f362e7--7983--50b5--b688--a41104a01860'], 'host': 
'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-qtdq0z-8J15-gP85-P9SJ-dK07-zWb5-0DnwzK', 'scsi-0QEMU_QEMU_HARDDISK_cdfa8505-de86-48ff-8ed6-b6e1381a94b2', 'scsi-SQEMU_QEMU_HARDDISK_cdfa8505-de86-48ff-8ed6-b6e1381a94b2'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-05-25 04:01:07.636619 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-05-25 04:01:07.636631 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'holders': ['ceph--b24cffad--8a1f--50fd--b816--ada28c3c4ac7-osd--block--b24cffad--8a1f--50fd--b816--ada28c3c4ac7'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-zFQRk7-wHUy-2Er2-kQSV-Uuzs-Y07c-0XeRqW', 'scsi-0QEMU_QEMU_HARDDISK_4276f8fa-1a41-4d3c-8190-a1d2d3b80049', 'scsi-SQEMU_QEMU_HARDDISK_4276f8fa-1a41-4d3c-8190-a1d2d3b80049'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-05-25 04:01:07.636655 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-05-25 04:01:07.636667 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_dac67b12-4a3b-49b0-a18f-dd9740769fda', 'scsi-SQEMU_QEMU_HARDDISK_dac67b12-4a3b-49b0-a18f-dd9740769fda'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-05-25 04:01:07.636679 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-05-25-03-02-49-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-05-25 04:01:07.636696 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-05-25 04:01:07.636713 | orchestrator | skipping: 
[testbed-node-3] 2025-05-25 04:01:07.636725 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-05-25 04:01:07.636742 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-05-25 04:01:07.636754 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-05-25 04:01:07.636766 | orchestrator | skipping: 
[testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-05-25 04:01:07.636785 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_43c767cb-0159-4619-b6d4-e498fa6e5c83', 'scsi-SQEMU_QEMU_HARDDISK_43c767cb-0159-4619-b6d4-e498fa6e5c83'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_43c767cb-0159-4619-b6d4-e498fa6e5c83-part1', 'scsi-SQEMU_QEMU_HARDDISK_43c767cb-0159-4619-b6d4-e498fa6e5c83-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['372462ea-137d-4e94-9465-a2fbb2a7f4ee']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': '372462ea-137d-4e94-9465-a2fbb2a7f4ee'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_43c767cb-0159-4619-b6d4-e498fa6e5c83-part14', 'scsi-SQEMU_QEMU_HARDDISK_43c767cb-0159-4619-b6d4-e498fa6e5c83-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': 
['scsi-0QEMU_QEMU_HARDDISK_43c767cb-0159-4619-b6d4-e498fa6e5c83-part15', 'scsi-SQEMU_QEMU_HARDDISK_43c767cb-0159-4619-b6d4-e498fa6e5c83-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['A4F8-12D8']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': 'A4F8-12D8'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_43c767cb-0159-4619-b6d4-e498fa6e5c83-part16', 'scsi-SQEMU_QEMU_HARDDISK_43c767cb-0159-4619-b6d4-e498fa6e5c83-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['0de9fa52-b0fa-4de2-9fd3-df23fb104826']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '0de9fa52-b0fa-4de2-9fd3-df23fb104826'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-05-25 04:01:07.636804 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'holders': ['ceph--02ca1cf7--fa58--5bc0--a798--b7d21582c1b0-osd--block--02ca1cf7--fa58--5bc0--a798--b7d21582c1b0'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-75K85T-qtHF-V2PQ-3keF-McNZ-rYMq-djrq4j', 'scsi-0QEMU_QEMU_HARDDISK_17d1c6f1-1305-4025-b6c8-ee1be555c001', 'scsi-SQEMU_QEMU_HARDDISK_17d1c6f1-1305-4025-b6c8-ee1be555c001'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-05-25 04:01:07.636816 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--33e996ff--67e1--5789--9eb3--97043475c088-osd--block--33e996ff--67e1--5789--9eb3--97043475c088', 'dm-uuid-LVM-f2mxDkg5RboGiSFRnoZoE0Jf5zdoZooLX3dEGjd0x3LIyAGja8yP08lNRRkeYga4'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-05-25 04:01:07.636889 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'holders': ['ceph--733a1394--dd45--5d63--8d82--63858202edf3-osd--block--733a1394--dd45--5d63--8d82--63858202edf3'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-XBC6Ra-aDc7-yze2-aQJ9-K1bq-dbNG-W4h3yL', 'scsi-0QEMU_QEMU_HARDDISK_b0e50223-c4d0-48f7-a5f8-d1963b067c82', 'scsi-SQEMU_QEMU_HARDDISK_b0e50223-c4d0-48f7-a5f8-d1963b067c82'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-05-25 04:01:07.636916 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--3ece5568--3437--595e--b3ba--b2f91a77c86c-osd--block--3ece5568--3437--595e--b3ba--b2f91a77c86c', 'dm-uuid-LVM-M0xTfxjiXljnWhv0xWS2ZQJ2ZEKwlMtX3setecTbz5KjpidltETUQJYINQ7cMcdk'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-05-25 04:01:07.636935 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_38e86a76-d592-4447-9c79-2151d2192c3f', 'scsi-SQEMU_QEMU_HARDDISK_38e86a76-d592-4447-9c79-2151d2192c3f'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-05-25 04:01:07.636947 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-05-25 04:01:07.636959 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-05-25-03-02-47-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-05-25 04:01:07.636970 | orchestrator | skipping: 
[testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-05-25 04:01:07.636982 | orchestrator | skipping: [testbed-node-4] 2025-05-25 04:01:07.636993 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-05-25 04:01:07.637010 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-05-25 04:01:07.637032 | orchestrator | skipping: 
[testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-05-25 04:01:07.637044 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-05-25 04:01:07.637056 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-05-25 04:01:07.637067 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 
'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-05-25 04:01:07.637091 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_2fcb52f5-9fc6-4d4d-aaa1-77a08e6dc4e0', 'scsi-SQEMU_QEMU_HARDDISK_2fcb52f5-9fc6-4d4d-aaa1-77a08e6dc4e0'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_2fcb52f5-9fc6-4d4d-aaa1-77a08e6dc4e0-part1', 'scsi-SQEMU_QEMU_HARDDISK_2fcb52f5-9fc6-4d4d-aaa1-77a08e6dc4e0-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['372462ea-137d-4e94-9465-a2fbb2a7f4ee']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': '372462ea-137d-4e94-9465-a2fbb2a7f4ee'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_2fcb52f5-9fc6-4d4d-aaa1-77a08e6dc4e0-part14', 'scsi-SQEMU_QEMU_HARDDISK_2fcb52f5-9fc6-4d4d-aaa1-77a08e6dc4e0-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_2fcb52f5-9fc6-4d4d-aaa1-77a08e6dc4e0-part15', 'scsi-SQEMU_QEMU_HARDDISK_2fcb52f5-9fc6-4d4d-aaa1-77a08e6dc4e0-part15'], 
'labels': ['UEFI'], 'masters': [], 'uuids': ['A4F8-12D8']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': 'A4F8-12D8'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_2fcb52f5-9fc6-4d4d-aaa1-77a08e6dc4e0-part16', 'scsi-SQEMU_QEMU_HARDDISK_2fcb52f5-9fc6-4d4d-aaa1-77a08e6dc4e0-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['0de9fa52-b0fa-4de2-9fd3-df23fb104826']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '0de9fa52-b0fa-4de2-9fd3-df23fb104826'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-05-25 04:01:07.637110 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'holders': ['ceph--33e996ff--67e1--5789--9eb3--97043475c088-osd--block--33e996ff--67e1--5789--9eb3--97043475c088'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-yI0x8n-xcOR-DPeb-Offp-taab-jv40-D8CklK', 'scsi-0QEMU_QEMU_HARDDISK_201f277c-fdb2-416e-b305-0d8ba90b32cd', 'scsi-SQEMU_QEMU_HARDDISK_201f277c-fdb2-416e-b305-0d8ba90b32cd'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-05-25 04:01:07.637122 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'holders': ['ceph--3ece5568--3437--595e--b3ba--b2f91a77c86c-osd--block--3ece5568--3437--595e--b3ba--b2f91a77c86c'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-TOpkSP-RmlJ-8nES-992L-XmPw-19k6-xzW621', 'scsi-0QEMU_QEMU_HARDDISK_8968a7f7-851b-405b-80f4-de48ab1dffee', 'scsi-SQEMU_QEMU_HARDDISK_8968a7f7-851b-405b-80f4-de48ab1dffee'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-05-25 04:01:07.637134 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_603d0154-8a06-450e-a743-756d85b1bc6a', 'scsi-SQEMU_QEMU_HARDDISK_603d0154-8a06-450e-a743-756d85b1bc6a'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-05-25 04:01:07.637156 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-05-25-03-02-44-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-05-25 04:01:07.637174 | orchestrator | skipping: [testbed-node-5] 2025-05-25 04:01:07.637186 | orchestrator | 2025-05-25 04:01:07.637197 | orchestrator | TASK [ceph-facts : Check if the ceph conf exists] ****************************** 2025-05-25 04:01:07.637208 | orchestrator | Sunday 25 May 2025 03:59:18 +0000 (0:00:00.581) 0:00:16.437 ************ 2025-05-25 04:01:07.637219 | orchestrator | ok: [testbed-node-3] 2025-05-25 04:01:07.637230 | orchestrator | ok: [testbed-node-4] 2025-05-25 04:01:07.637241 | orchestrator | ok: [testbed-node-5] 2025-05-25 04:01:07.637252 | orchestrator | 2025-05-25 04:01:07.637263 | orchestrator | TASK [ceph-facts : Set default 
osd_pool_default_crush_rule fact] *************** 2025-05-25 04:01:07.637274 | orchestrator | Sunday 25 May 2025 03:59:18 +0000 (0:00:00.650) 0:00:17.087 ************ 2025-05-25 04:01:07.637285 | orchestrator | ok: [testbed-node-3] 2025-05-25 04:01:07.637296 | orchestrator | ok: [testbed-node-4] 2025-05-25 04:01:07.637307 | orchestrator | ok: [testbed-node-5] 2025-05-25 04:01:07.637318 | orchestrator | 2025-05-25 04:01:07.637328 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] *************************** 2025-05-25 04:01:07.637340 | orchestrator | Sunday 25 May 2025 03:59:19 +0000 (0:00:00.470) 0:00:17.558 ************ 2025-05-25 04:01:07.637350 | orchestrator | ok: [testbed-node-3] 2025-05-25 04:01:07.637361 | orchestrator | ok: [testbed-node-4] 2025-05-25 04:01:07.637372 | orchestrator | ok: [testbed-node-5] 2025-05-25 04:01:07.637383 | orchestrator | 2025-05-25 04:01:07.637394 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] *********************** 2025-05-25 04:01:07.637405 | orchestrator | Sunday 25 May 2025 03:59:20 +0000 (0:00:00.629) 0:00:18.188 ************ 2025-05-25 04:01:07.637416 | orchestrator | skipping: [testbed-node-3] 2025-05-25 04:01:07.637427 | orchestrator | skipping: [testbed-node-4] 2025-05-25 04:01:07.637438 | orchestrator | skipping: [testbed-node-5] 2025-05-25 04:01:07.637449 | orchestrator | 2025-05-25 04:01:07.637460 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] *************************** 2025-05-25 04:01:07.637471 | orchestrator | Sunday 25 May 2025 03:59:20 +0000 (0:00:00.290) 0:00:18.478 ************ 2025-05-25 04:01:07.637482 | orchestrator | skipping: [testbed-node-3] 2025-05-25 04:01:07.637493 | orchestrator | skipping: [testbed-node-4] 2025-05-25 04:01:07.637503 | orchestrator | skipping: [testbed-node-5] 2025-05-25 04:01:07.637514 | orchestrator | 2025-05-25 04:01:07.637525 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] 
*********************** 2025-05-25 04:01:07.637536 | orchestrator | Sunday 25 May 2025 03:59:20 +0000 (0:00:00.391) 0:00:18.870 ************ 2025-05-25 04:01:07.637547 | orchestrator | skipping: [testbed-node-3] 2025-05-25 04:01:07.637558 | orchestrator | skipping: [testbed-node-4] 2025-05-25 04:01:07.637569 | orchestrator | skipping: [testbed-node-5] 2025-05-25 04:01:07.637579 | orchestrator | 2025-05-25 04:01:07.637591 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv4] ************************* 2025-05-25 04:01:07.637601 | orchestrator | Sunday 25 May 2025 03:59:21 +0000 (0:00:00.463) 0:00:19.334 ************ 2025-05-25 04:01:07.637612 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-0) 2025-05-25 04:01:07.637624 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-0) 2025-05-25 04:01:07.637635 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-1) 2025-05-25 04:01:07.637646 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-0) 2025-05-25 04:01:07.637657 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-1) 2025-05-25 04:01:07.637668 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-2) 2025-05-25 04:01:07.637679 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-1) 2025-05-25 04:01:07.637697 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-2) 2025-05-25 04:01:07.637708 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-2) 2025-05-25 04:01:07.637719 | orchestrator | 2025-05-25 04:01:07.637730 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv6] ************************* 2025-05-25 04:01:07.637741 | orchestrator | Sunday 25 May 2025 03:59:22 +0000 (0:00:00.834) 0:00:20.169 ************ 2025-05-25 04:01:07.637752 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)  2025-05-25 04:01:07.637763 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)  2025-05-25 04:01:07.637774 | orchestrator | skipping: 
[testbed-node-3] => (item=testbed-node-2)  2025-05-25 04:01:07.637784 | orchestrator | skipping: [testbed-node-3] 2025-05-25 04:01:07.637795 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-0)  2025-05-25 04:01:07.637806 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-1)  2025-05-25 04:01:07.637817 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-2)  2025-05-25 04:01:07.637899 | orchestrator | skipping: [testbed-node-4] 2025-05-25 04:01:07.637911 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-0)  2025-05-25 04:01:07.637922 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-1)  2025-05-25 04:01:07.637933 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-2)  2025-05-25 04:01:07.637944 | orchestrator | skipping: [testbed-node-5] 2025-05-25 04:01:07.637955 | orchestrator | 2025-05-25 04:01:07.637966 | orchestrator | TASK [ceph-facts : Import_tasks set_radosgw_address.yml] *********************** 2025-05-25 04:01:07.637977 | orchestrator | Sunday 25 May 2025 03:59:22 +0000 (0:00:00.325) 0:00:20.494 ************ 2025-05-25 04:01:07.637988 | orchestrator | included: /ansible/roles/ceph-facts/tasks/set_radosgw_address.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-05-25 04:01:07.637999 | orchestrator | 2025-05-25 04:01:07.638010 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] *** 2025-05-25 04:01:07.638080 | orchestrator | Sunday 25 May 2025 03:59:23 +0000 (0:00:00.673) 0:00:21.168 ************ 2025-05-25 04:01:07.638092 | orchestrator | skipping: [testbed-node-3] 2025-05-25 04:01:07.638103 | orchestrator | skipping: [testbed-node-4] 2025-05-25 04:01:07.638114 | orchestrator | skipping: [testbed-node-5] 2025-05-25 04:01:07.638125 | orchestrator | 2025-05-25 04:01:07.638144 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block 
ipv4] **** 2025-05-25 04:01:07.638156 | orchestrator | Sunday 25 May 2025 03:59:23 +0000 (0:00:00.297) 0:00:21.465 ************ 2025-05-25 04:01:07.638167 | orchestrator | skipping: [testbed-node-3] 2025-05-25 04:01:07.638178 | orchestrator | skipping: [testbed-node-4] 2025-05-25 04:01:07.638189 | orchestrator | skipping: [testbed-node-5] 2025-05-25 04:01:07.638199 | orchestrator | 2025-05-25 04:01:07.638217 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] **** 2025-05-25 04:01:07.638227 | orchestrator | Sunday 25 May 2025 03:59:23 +0000 (0:00:00.310) 0:00:21.776 ************ 2025-05-25 04:01:07.638237 | orchestrator | skipping: [testbed-node-3] 2025-05-25 04:01:07.638247 | orchestrator | skipping: [testbed-node-4] 2025-05-25 04:01:07.638257 | orchestrator | skipping: [testbed-node-5] 2025-05-25 04:01:07.638266 | orchestrator | 2025-05-25 04:01:07.638276 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] *************** 2025-05-25 04:01:07.638286 | orchestrator | Sunday 25 May 2025 03:59:23 +0000 (0:00:00.305) 0:00:22.082 ************ 2025-05-25 04:01:07.638296 | orchestrator | ok: [testbed-node-3] 2025-05-25 04:01:07.638305 | orchestrator | ok: [testbed-node-4] 2025-05-25 04:01:07.638315 | orchestrator | ok: [testbed-node-5] 2025-05-25 04:01:07.638325 | orchestrator | 2025-05-25 04:01:07.638335 | orchestrator | TASK [ceph-facts : Set_fact _interface] **************************************** 2025-05-25 04:01:07.638345 | orchestrator | Sunday 25 May 2025 03:59:24 +0000 (0:00:00.606) 0:00:22.688 ************ 2025-05-25 04:01:07.638355 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-05-25 04:01:07.638365 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-05-25 04:01:07.638385 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-05-25 04:01:07.638395 | orchestrator | skipping: [testbed-node-3] 2025-05-25 04:01:07.638405 | 
orchestrator | 2025-05-25 04:01:07.638414 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ****** 2025-05-25 04:01:07.638424 | orchestrator | Sunday 25 May 2025 03:59:24 +0000 (0:00:00.391) 0:00:23.079 ************ 2025-05-25 04:01:07.638434 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-05-25 04:01:07.638444 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-05-25 04:01:07.638453 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-05-25 04:01:07.638463 | orchestrator | skipping: [testbed-node-3] 2025-05-25 04:01:07.638473 | orchestrator | 2025-05-25 04:01:07.638482 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ****** 2025-05-25 04:01:07.638492 | orchestrator | Sunday 25 May 2025 03:59:25 +0000 (0:00:00.380) 0:00:23.460 ************ 2025-05-25 04:01:07.638502 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-05-25 04:01:07.638511 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-05-25 04:01:07.638521 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-05-25 04:01:07.638531 | orchestrator | skipping: [testbed-node-3] 2025-05-25 04:01:07.638540 | orchestrator | 2025-05-25 04:01:07.638550 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] *************************** 2025-05-25 04:01:07.638560 | orchestrator | Sunday 25 May 2025 03:59:25 +0000 (0:00:00.345) 0:00:23.805 ************ 2025-05-25 04:01:07.638570 | orchestrator | ok: [testbed-node-3] 2025-05-25 04:01:07.638580 | orchestrator | ok: [testbed-node-4] 2025-05-25 04:01:07.638589 | orchestrator | ok: [testbed-node-5] 2025-05-25 04:01:07.638599 | orchestrator | 2025-05-25 04:01:07.638609 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] ************************************* 2025-05-25 04:01:07.638618 | orchestrator | Sunday 25 May 2025 03:59:25 +0000 
(0:00:00.305) 0:00:24.111 ************ 2025-05-25 04:01:07.638628 | orchestrator | ok: [testbed-node-3] => (item=0) 2025-05-25 04:01:07.638638 | orchestrator | ok: [testbed-node-4] => (item=0) 2025-05-25 04:01:07.638647 | orchestrator | ok: [testbed-node-5] => (item=0) 2025-05-25 04:01:07.638657 | orchestrator | 2025-05-25 04:01:07.638667 | orchestrator | TASK [ceph-facts : Set_fact ceph_run_cmd] ************************************** 2025-05-25 04:01:07.638677 | orchestrator | Sunday 25 May 2025 03:59:26 +0000 (0:00:00.497) 0:00:24.609 ************ 2025-05-25 04:01:07.638687 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2025-05-25 04:01:07.638696 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2025-05-25 04:01:07.638706 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2025-05-25 04:01:07.638716 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-3) 2025-05-25 04:01:07.638725 | orchestrator | ok: [testbed-node-3 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4) 2025-05-25 04:01:07.638735 | orchestrator | ok: [testbed-node-3 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5) 2025-05-25 04:01:07.638745 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] => (item=testbed-manager) 2025-05-25 04:01:07.638755 | orchestrator | 2025-05-25 04:01:07.638764 | orchestrator | TASK [ceph-facts : Set_fact ceph_admin_command] ******************************** 2025-05-25 04:01:07.638774 | orchestrator | Sunday 25 May 2025 03:59:27 +0000 (0:00:00.915) 0:00:25.524 ************ 2025-05-25 04:01:07.638784 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2025-05-25 04:01:07.638793 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2025-05-25 04:01:07.638803 | orchestrator | ok: [testbed-node-3 -> 
testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2025-05-25 04:01:07.638813 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-3) 2025-05-25 04:01:07.638847 | orchestrator | ok: [testbed-node-3 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4) 2025-05-25 04:01:07.638865 | orchestrator | ok: [testbed-node-3 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5) 2025-05-25 04:01:07.638875 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] => (item=testbed-manager) 2025-05-25 04:01:07.638884 | orchestrator | 2025-05-25 04:01:07.638899 | orchestrator | TASK [Include tasks from the ceph-osd role] ************************************ 2025-05-25 04:01:07.638910 | orchestrator | Sunday 25 May 2025 03:59:29 +0000 (0:00:01.818) 0:00:27.343 ************ 2025-05-25 04:01:07.638919 | orchestrator | skipping: [testbed-node-3] 2025-05-25 04:01:07.638929 | orchestrator | skipping: [testbed-node-4] 2025-05-25 04:01:07.638944 | orchestrator | included: /ansible/tasks/openstack_config.yml for testbed-node-5 2025-05-25 04:01:07.638954 | orchestrator | 2025-05-25 04:01:07.638964 | orchestrator | TASK [create openstack pool(s)] ************************************************ 2025-05-25 04:01:07.638973 | orchestrator | Sunday 25 May 2025 03:59:29 +0000 (0:00:00.354) 0:00:27.698 ************ 2025-05-25 04:01:07.638984 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'application': 'rbd', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'backups', 'pg_autoscale_mode': False, 'pg_num': 32, 'pgp_num': 32, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1}) 2025-05-25 04:01:07.638994 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'application': 'rbd', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'volumes', 'pg_autoscale_mode': False, 'pg_num': 32, 'pgp_num': 32, 'rule_name': 'replicated_rule', 'size': 3, 'type': 
1}) 2025-05-25 04:01:07.639005 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'application': 'rbd', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'images', 'pg_autoscale_mode': False, 'pg_num': 32, 'pgp_num': 32, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1}) 2025-05-25 04:01:07.639015 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'application': 'rbd', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'metrics', 'pg_autoscale_mode': False, 'pg_num': 32, 'pgp_num': 32, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1}) 2025-05-25 04:01:07.639025 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'application': 'rbd', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'vms', 'pg_autoscale_mode': False, 'pg_num': 32, 'pgp_num': 32, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1}) 2025-05-25 04:01:07.639035 | orchestrator | 2025-05-25 04:01:07.639045 | orchestrator | TASK [generate keys] *********************************************************** 2025-05-25 04:01:07.639054 | orchestrator | Sunday 25 May 2025 04:00:14 +0000 (0:00:44.593) 0:01:12.292 ************ 2025-05-25 04:01:07.639064 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-05-25 04:01:07.639074 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-05-25 04:01:07.639083 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-05-25 04:01:07.639093 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-05-25 04:01:07.639103 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-05-25 04:01:07.639112 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-05-25 
04:01:07.639122 | orchestrator | changed: [testbed-node-5 -> {{ groups[mon_group_name][0] }}] 2025-05-25 04:01:07.639131 | orchestrator | 2025-05-25 04:01:07.639141 | orchestrator | TASK [get keys from monitors] ************************************************** 2025-05-25 04:01:07.639151 | orchestrator | Sunday 25 May 2025 04:00:37 +0000 (0:00:23.286) 0:01:35.578 ************ 2025-05-25 04:01:07.639160 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-05-25 04:01:07.639170 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-05-25 04:01:07.639186 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-05-25 04:01:07.639195 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-05-25 04:01:07.639205 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-05-25 04:01:07.639215 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-05-25 04:01:07.639224 | orchestrator | ok: [testbed-node-5 -> {{ groups.get(mon_group_name)[0] }}] 2025-05-25 04:01:07.639234 | orchestrator | 2025-05-25 04:01:07.639244 | orchestrator | TASK [copy ceph key(s) if needed] ********************************************** 2025-05-25 04:01:07.639253 | orchestrator | Sunday 25 May 2025 04:00:49 +0000 (0:00:12.047) 0:01:47.626 ************ 2025-05-25 04:01:07.639263 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-05-25 04:01:07.639273 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None) 2025-05-25 04:01:07.639282 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None) 2025-05-25 04:01:07.639292 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-05-25 04:01:07.639301 | orchestrator | changed: [testbed-node-5 -> 
testbed-node-1(192.168.16.11)] => (item=None) 2025-05-25 04:01:07.639311 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None) 2025-05-25 04:01:07.639325 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-05-25 04:01:07.639335 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None) 2025-05-25 04:01:07.639345 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None) 2025-05-25 04:01:07.639361 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-05-25 04:01:07.639371 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None) 2025-05-25 04:01:07.639381 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None) 2025-05-25 04:01:07.639390 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-05-25 04:01:07.639400 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None) 2025-05-25 04:01:07.639409 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None) 2025-05-25 04:01:07.639419 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-05-25 04:01:07.639429 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None) 2025-05-25 04:01:07.639438 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None) 2025-05-25 04:01:07.639448 | orchestrator | changed: [testbed-node-5 -> {{ item.1 }}] 2025-05-25 04:01:07.639458 | orchestrator | 2025-05-25 04:01:07.639467 | orchestrator | PLAY RECAP ********************************************************************* 2025-05-25 04:01:07.639477 | orchestrator | testbed-node-3 : ok=25  changed=0 unreachable=0 failed=0 skipped=28  rescued=0 ignored=0 2025-05-25 04:01:07.639488 | 
orchestrator | testbed-node-4 : ok=18  changed=0 unreachable=0 failed=0 skipped=21  rescued=0 ignored=0 2025-05-25 04:01:07.639498 | orchestrator | testbed-node-5 : ok=23  changed=3  unreachable=0 failed=0 skipped=20  rescued=0 ignored=0 2025-05-25 04:01:07.639508 | orchestrator | 2025-05-25 04:01:07.639517 | orchestrator | 2025-05-25 04:01:07.639527 | orchestrator | 2025-05-25 04:01:07.639537 | orchestrator | TASKS RECAP ******************************************************************** 2025-05-25 04:01:07.639547 | orchestrator | Sunday 25 May 2025 04:01:06 +0000 (0:00:16.862) 0:02:04.488 ************ 2025-05-25 04:01:07.639556 | orchestrator | =============================================================================== 2025-05-25 04:01:07.639571 | orchestrator | create openstack pool(s) ----------------------------------------------- 44.59s 2025-05-25 04:01:07.639581 | orchestrator | generate keys ---------------------------------------------------------- 23.29s 2025-05-25 04:01:07.639591 | orchestrator | copy ceph key(s) if needed --------------------------------------------- 16.86s 2025-05-25 04:01:07.639601 | orchestrator | get keys from monitors ------------------------------------------------- 12.05s 2025-05-25 04:01:07.639610 | orchestrator | ceph-facts : Find a running mon container ------------------------------- 2.06s 2025-05-25 04:01:07.639620 | orchestrator | ceph-facts : Set_fact ceph_admin_command -------------------------------- 1.82s 2025-05-25 04:01:07.639629 | orchestrator | ceph-facts : Get current fsid if cluster is already running ------------- 1.57s 2025-05-25 04:01:07.639639 | orchestrator | ceph-facts : Set_fact ceph_run_cmd -------------------------------------- 0.92s 2025-05-25 04:01:07.639649 | orchestrator | ceph-facts : Set_fact _monitor_addresses - ipv4 ------------------------- 0.83s 2025-05-25 04:01:07.639658 | orchestrator | ceph-facts : Check if the ceph mon socket is in-use --------------------- 0.75s 2025-05-25 
04:01:07.639668 | orchestrator | ceph-facts : Check if podman binary is present -------------------------- 0.74s
2025-05-25 04:01:07.639678 | orchestrator | ceph-facts : Import_tasks set_radosgw_address.yml ----------------------- 0.67s
2025-05-25 04:01:07.639687 | orchestrator | ceph-facts : Check if the ceph conf exists ------------------------------ 0.65s
2025-05-25 04:01:07.639697 | orchestrator | ceph-facts : Set_fact monitor_name ansible_facts['hostname'] ------------ 0.64s
2025-05-25 04:01:07.639706 | orchestrator | ceph-facts : Read osd pool default crush rule --------------------------- 0.63s
2025-05-25 04:01:07.639716 | orchestrator | ceph-facts : Check if it is atomic host --------------------------------- 0.61s
2025-05-25 04:01:07.639725 | orchestrator | ceph-facts : Set_fact _radosgw_address to radosgw_address --------------- 0.61s
2025-05-25 04:01:07.639735 | orchestrator | ceph-facts : Include facts.yml ------------------------------------------ 0.60s
2025-05-25 04:01:07.639745 | orchestrator | ceph-facts : Set_fact devices generate device list when osd_auto_discovery --- 0.58s
2025-05-25 04:01:07.639754 | orchestrator | ceph-facts : Collect existed devices ------------------------------------ 0.54s
2025-05-25 04:01:07.639764 | orchestrator | 2025-05-25 04:01:07 | INFO  | Wait 1 second(s) until the next check
2025-05-25 04:01:10.685332 | orchestrator | 2025-05-25 04:01:10 | INFO  | Task 99668bf5-3424-4b3a-8674-e2d5e8b54c5f is in state STARTED
2025-05-25 04:01:10.687297 | orchestrator | 2025-05-25 04:01:10 | INFO  | Task 70e0c9ed-37fa-4303-84bd-948b4f696f4e is in state STARTED
2025-05-25 04:01:10.690287 | orchestrator | 2025-05-25 04:01:10 | INFO  | Task 5e38feec-0d6a-4d0c-9e6e-16f705495470 is in state STARTED
2025-05-25 04:01:10.690699 | orchestrator | 2025-05-25 04:01:10 | INFO  | Wait 1 second(s) until the next check
2025-05-25 04:01:38.203926 | orchestrator | 2025-05-25 04:01:38 | INFO  | Task 99668bf5-3424-4b3a-8674-e2d5e8b54c5f is in state SUCCESS
2025-05-25 04:01:38.204060 | orchestrator | 2025-05-25 04:01:38 | INFO  | Task 70e0c9ed-37fa-4303-84bd-948b4f696f4e is in state STARTED
2025-05-25 04:01:38.204665 | orchestrator | 2025-05-25 04:01:38 | INFO  | Task 5e38feec-0d6a-4d0c-9e6e-16f705495470 is in state STARTED
2025-05-25 04:01:38.206352 | orchestrator | 2025-05-25 04:01:38 | INFO  | Task 1793fb62-a55b-4f95-a01d-c56d31d9823a is in state STARTED
2025-05-25 04:01:38.206385 | orchestrator | 2025-05-25 04:01:38 | INFO  | Wait 1 second(s) until the next check
2025-05-25 04:01:53.471415 | orchestrator | 2025-05-25 04:01:53 | INFO  | Task 70e0c9ed-37fa-4303-84bd-948b4f696f4e is in state STARTED
2025-05-25 04:01:53.473067 | orchestrator | 2025-05-25 04:01:53 | INFO  | Task 5e38feec-0d6a-4d0c-9e6e-16f705495470 is in state SUCCESS
2025-05-25 04:01:53.474440 | orchestrator |
2025-05-25 04:01:53.474483 | orchestrator |
2025-05-25 04:01:53.474496 | orchestrator | PLAY [Copy ceph keys to the configuration repository] **************************
2025-05-25 04:01:53.474508 | orchestrator |
2025-05-25 04:01:53.474519 | orchestrator | TASK [Fetch all ceph keys] *****************************************************
2025-05-25 04:01:53.474531 | orchestrator | Sunday 25 May 2025 04:01:10 +0000 (0:00:00.150) 0:00:00.150 ************
2025-05-25 04:01:53.474542 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.admin.keyring)
2025-05-25 04:01:53.474555 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder.keyring)
2025-05-25 04:01:53.474566 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder.keyring)
2025-05-25 04:01:53.474577 | orchestrator |
ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder-backup.keyring) 2025-05-25 04:01:53.474962 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder.keyring) 2025-05-25 04:01:53.474979 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.nova.keyring) 2025-05-25 04:01:53.474990 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.glance.keyring) 2025-05-25 04:01:53.475001 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.gnocchi.keyring) 2025-05-25 04:01:53.475012 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.manila.keyring) 2025-05-25 04:01:53.475053 | orchestrator | 2025-05-25 04:01:53.475065 | orchestrator | TASK [Create share directory] ************************************************** 2025-05-25 04:01:53.475076 | orchestrator | Sunday 25 May 2025 04:01:14 +0000 (0:00:04.150) 0:00:04.300 ************ 2025-05-25 04:01:53.475088 | orchestrator | changed: [testbed-manager -> localhost] 2025-05-25 04:01:53.475099 | orchestrator | 2025-05-25 04:01:53.475110 | orchestrator | TASK [Write ceph keys to the share directory] ********************************** 2025-05-25 04:01:53.475121 | orchestrator | Sunday 25 May 2025 04:01:15 +0000 (0:00:00.940) 0:00:05.240 ************ 2025-05-25 04:01:53.475132 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.admin.keyring) 2025-05-25 04:01:53.475143 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.cinder.keyring) 2025-05-25 04:01:53.475154 | orchestrator | ok: [testbed-manager -> localhost] => (item=ceph.client.cinder.keyring) 2025-05-25 04:01:53.475165 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.cinder-backup.keyring) 2025-05-25 04:01:53.475176 | orchestrator | ok: [testbed-manager -> localhost] 
=> (item=ceph.client.cinder.keyring) 2025-05-25 04:01:53.475186 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.nova.keyring) 2025-05-25 04:01:53.475197 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.glance.keyring) 2025-05-25 04:01:53.475208 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.gnocchi.keyring) 2025-05-25 04:01:53.475233 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.manila.keyring) 2025-05-25 04:01:53.475244 | orchestrator | 2025-05-25 04:01:53.475255 | orchestrator | TASK [Write ceph keys to the configuration directory] ************************** 2025-05-25 04:01:53.475266 | orchestrator | Sunday 25 May 2025 04:01:28 +0000 (0:00:12.779) 0:00:18.020 ************ 2025-05-25 04:01:53.475278 | orchestrator | changed: [testbed-manager] => (item=ceph.client.admin.keyring) 2025-05-25 04:01:53.475288 | orchestrator | changed: [testbed-manager] => (item=ceph.client.cinder.keyring) 2025-05-25 04:01:53.475299 | orchestrator | changed: [testbed-manager] => (item=ceph.client.cinder.keyring) 2025-05-25 04:01:53.475310 | orchestrator | changed: [testbed-manager] => (item=ceph.client.cinder-backup.keyring) 2025-05-25 04:01:53.475321 | orchestrator | changed: [testbed-manager] => (item=ceph.client.cinder.keyring) 2025-05-25 04:01:53.475331 | orchestrator | changed: [testbed-manager] => (item=ceph.client.nova.keyring) 2025-05-25 04:01:53.475342 | orchestrator | changed: [testbed-manager] => (item=ceph.client.glance.keyring) 2025-05-25 04:01:53.475353 | orchestrator | changed: [testbed-manager] => (item=ceph.client.gnocchi.keyring) 2025-05-25 04:01:53.475364 | orchestrator | changed: [testbed-manager] => (item=ceph.client.manila.keyring) 2025-05-25 04:01:53.475375 | orchestrator | 2025-05-25 04:01:53.475385 | orchestrator | PLAY RECAP ********************************************************************* 2025-05-25 04:01:53.475396 | orchestrator | 
testbed-manager : ok=4  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-05-25 04:01:53.475409 | orchestrator | 2025-05-25 04:01:53.475419 | orchestrator | 2025-05-25 04:01:53.475430 | orchestrator | TASKS RECAP ******************************************************************** 2025-05-25 04:01:53.475441 | orchestrator | Sunday 25 May 2025 04:01:35 +0000 (0:00:06.536) 0:00:24.557 ************ 2025-05-25 04:01:53.475452 | orchestrator | =============================================================================== 2025-05-25 04:01:53.475463 | orchestrator | Write ceph keys to the share directory --------------------------------- 12.78s 2025-05-25 04:01:53.475474 | orchestrator | Write ceph keys to the configuration directory -------------------------- 6.54s 2025-05-25 04:01:53.475484 | orchestrator | Fetch all ceph keys ----------------------------------------------------- 4.15s 2025-05-25 04:01:53.475495 | orchestrator | Create share directory -------------------------------------------------- 0.94s 2025-05-25 04:01:53.475506 | orchestrator | 2025-05-25 04:01:53.475525 | orchestrator | 2025-05-25 04:01:53.475536 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-05-25 04:01:53.475547 | orchestrator | 2025-05-25 04:01:53.475569 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-05-25 04:01:53.475583 | orchestrator | Sunday 25 May 2025 04:00:06 +0000 (0:00:00.227) 0:00:00.227 ************ 2025-05-25 04:01:53.475596 | orchestrator | ok: [testbed-node-0] 2025-05-25 04:01:53.475609 | orchestrator | ok: [testbed-node-1] 2025-05-25 04:01:53.475621 | orchestrator | ok: [testbed-node-2] 2025-05-25 04:01:53.475634 | orchestrator | 2025-05-25 04:01:53.475646 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-05-25 04:01:53.475687 | orchestrator | Sunday 25 May 2025 04:00:07 +0000 (0:00:00.257) 
0:00:00.485 ************ 2025-05-25 04:01:53.475701 | orchestrator | ok: [testbed-node-0] => (item=enable_horizon_True) 2025-05-25 04:01:53.475714 | orchestrator | ok: [testbed-node-1] => (item=enable_horizon_True) 2025-05-25 04:01:53.475726 | orchestrator | ok: [testbed-node-2] => (item=enable_horizon_True) 2025-05-25 04:01:53.475739 | orchestrator | 2025-05-25 04:01:53.475751 | orchestrator | PLAY [Apply role horizon] ****************************************************** 2025-05-25 04:01:53.475763 | orchestrator | 2025-05-25 04:01:53.475776 | orchestrator | TASK [horizon : include_tasks] ************************************************* 2025-05-25 04:01:53.475788 | orchestrator | Sunday 25 May 2025 04:00:07 +0000 (0:00:00.348) 0:00:00.833 ************ 2025-05-25 04:01:53.475839 | orchestrator | included: /ansible/roles/horizon/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-25 04:01:53.475852 | orchestrator | 2025-05-25 04:01:53.475864 | orchestrator | TASK [horizon : Ensuring config directories exist] ***************************** 2025-05-25 04:01:53.475878 | orchestrator | Sunday 25 May 2025 04:00:07 +0000 (0:00:00.429) 0:00:01.262 ************ 2025-05-25 04:01:53.475905 | orchestrator | changed: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2025-05-25 04:01:53.475938 | orchestrator | changed: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 
'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2025-05-25 04:01:53.475970 | orchestrator | changed: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 
'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2025-05-25 04:01:53.475990 | orchestrator | 2025-05-25 04:01:53.476001 | orchestrator | TASK [horizon : Set empty custom policy] *************************************** 2025-05-25 04:01:53.476012 | orchestrator | Sunday 
25 May 2025 04:00:08 +0000 (0:00:01.039) 0:00:02.302 ************ 2025-05-25 04:01:53.476023 | orchestrator | ok: [testbed-node-0] 2025-05-25 04:01:53.476034 | orchestrator | ok: [testbed-node-1] 2025-05-25 04:01:53.476045 | orchestrator | ok: [testbed-node-2] 2025-05-25 04:01:53.476056 | orchestrator | 2025-05-25 04:01:53.476067 | orchestrator | TASK [horizon : include_tasks] ************************************************* 2025-05-25 04:01:53.476078 | orchestrator | Sunday 25 May 2025 04:00:09 +0000 (0:00:00.365) 0:00:02.668 ************ 2025-05-25 04:01:53.476088 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'cloudkitty', 'enabled': False})  2025-05-25 04:01:53.476099 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'heat', 'enabled': 'no'})  2025-05-25 04:01:53.476116 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'ironic', 'enabled': False})  2025-05-25 04:01:53.476127 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'masakari', 'enabled': False})  2025-05-25 04:01:53.476138 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'mistral', 'enabled': False})  2025-05-25 04:01:53.476149 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'tacker', 'enabled': False})  2025-05-25 04:01:53.476160 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'trove', 'enabled': False})  2025-05-25 04:01:53.476171 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'watcher', 'enabled': False})  2025-05-25 04:01:53.476182 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'cloudkitty', 'enabled': False})  2025-05-25 04:01:53.476193 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'heat', 'enabled': 'no'})  2025-05-25 04:01:53.476204 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'ironic', 'enabled': False})  2025-05-25 04:01:53.476214 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'masakari', 'enabled': False})  2025-05-25 
04:01:53.476225 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'mistral', 'enabled': False})  2025-05-25 04:01:53.476236 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'tacker', 'enabled': False})  2025-05-25 04:01:53.476247 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'trove', 'enabled': False})  2025-05-25 04:01:53.476258 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'watcher', 'enabled': False})  2025-05-25 04:01:53.476269 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'cloudkitty', 'enabled': False})  2025-05-25 04:01:53.476280 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'heat', 'enabled': 'no'})  2025-05-25 04:01:53.476290 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'ironic', 'enabled': False})  2025-05-25 04:01:53.476301 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'masakari', 'enabled': False})  2025-05-25 04:01:53.476312 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'mistral', 'enabled': False})  2025-05-25 04:01:53.476322 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'tacker', 'enabled': False})  2025-05-25 04:01:53.476333 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'trove', 'enabled': False})  2025-05-25 04:01:53.476344 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'watcher', 'enabled': False})  2025-05-25 04:01:53.476356 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'ceilometer', 'enabled': 'yes'}) 2025-05-25 04:01:53.476369 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'cinder', 'enabled': 'yes'}) 2025-05-25 04:01:53.476391 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 
'designate', 'enabled': True})
2025-05-25 04:01:53.476403 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'glance', 'enabled': True})
2025-05-25 04:01:53.476414 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'keystone', 'enabled': True})
2025-05-25 04:01:53.476425 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'magnum', 'enabled': True})
2025-05-25 04:01:53.476436 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'manila', 'enabled': True})
2025-05-25 04:01:53.476446 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'neutron', 'enabled': True})
2025-05-25 04:01:53.476457 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'nova', 'enabled': True})
2025-05-25 04:01:53.476468 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'octavia', 'enabled': True})
2025-05-25 04:01:53.476479 | orchestrator |
2025-05-25 04:01:53.476490 | orchestrator | TASK [horizon : Update policy file name] ***************************************
2025-05-25 04:01:53.476501 | orchestrator | Sunday 25 May 2025 04:00:09 +0000 (0:00:00.626) 0:00:03.294 ************
2025-05-25 04:01:53.476512 | orchestrator | ok: [testbed-node-0]
2025-05-25 04:01:53.476523 | orchestrator | ok: [testbed-node-1]
2025-05-25 04:01:53.476534 | orchestrator | ok: [testbed-node-2]
2025-05-25 04:01:53.476544 | orchestrator |
2025-05-25 04:01:53.476555 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************
2025-05-25 04:01:53.476566 | orchestrator | Sunday 25 May 2025 04:00:10 +0000 (0:00:00.274) 0:00:03.569 ************
2025-05-25 04:01:53.476577 | orchestrator | skipping: [testbed-node-0]
2025-05-25 04:01:53.476588 | orchestrator |
2025-05-25 04:01:53.476599 | orchestrator | TASK [horizon : Update custom policy file name] ********************************
2025-05-25 04:01:53.476615 | orchestrator | Sunday 25 May 2025 04:00:10 +0000 (0:00:00.115) 0:00:03.684 ************
2025-05-25 04:01:53.476642 | orchestrator | skipping: [testbed-node-0]
2025-05-25 04:01:53.476653 | orchestrator | skipping: [testbed-node-1]
2025-05-25 04:01:53.476665 | orchestrator | skipping: [testbed-node-2]
2025-05-25 04:01:53.476675 | orchestrator |
2025-05-25 04:01:53.476686 | orchestrator | TASK [horizon : Update policy file name] ***************************************
2025-05-25 04:01:53.476697 | orchestrator | Sunday 25 May 2025 04:00:10 +0000 (0:00:00.447) 0:00:04.131 ************
2025-05-25 04:01:53.476708 | orchestrator | ok: [testbed-node-0]
2025-05-25 04:01:53.476719 | orchestrator | ok: [testbed-node-1]
2025-05-25 04:01:53.476730 | orchestrator | ok: [testbed-node-2]
2025-05-25 04:01:53.476740 | orchestrator |
2025-05-25 04:01:53.476819 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************
2025-05-25 04:01:53.476831 | orchestrator | Sunday 25 May 2025 04:00:11 +0000 (0:00:00.326) 0:00:04.457 ************
2025-05-25 04:01:53.476842 | orchestrator | skipping: [testbed-node-0]
2025-05-25 04:01:53.476853 | orchestrator |
2025-05-25 04:01:53.476863 | orchestrator | TASK [horizon : Update custom policy file name] ********************************
2025-05-25 04:01:53.476874 | orchestrator | Sunday 25 May 2025 04:00:11 +0000 (0:00:00.126) 0:00:04.584 ************
2025-05-25 04:01:53.476885 | orchestrator | skipping: [testbed-node-0]
2025-05-25 04:01:53.476896 | orchestrator | skipping: [testbed-node-1]
2025-05-25 04:01:53.476907 | orchestrator | skipping: [testbed-node-2]
2025-05-25 04:01:53.476918 | orchestrator |
2025-05-25 04:01:53.476929 | orchestrator | TASK [horizon : Update policy file name] ***************************************
2025-05-25 04:01:53.476939 | orchestrator | Sunday 25 May 2025 04:00:11 +0000 (0:00:00.252) 0:00:04.836 ************
2025-05-25 04:01:53.476958 | orchestrator | ok: [testbed-node-0]
2025-05-25 04:01:53.476969 | orchestrator | ok: [testbed-node-1]
2025-05-25 04:01:53.476980 | orchestrator | ok: [testbed-node-2]
2025-05-25 04:01:53.476991 | orchestrator |
2025-05-25 04:01:53.477002 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************
2025-05-25 04:01:53.477012 | orchestrator | Sunday 25 May 2025 04:00:11 +0000 (0:00:00.296) 0:00:05.133 ************
2025-05-25 04:01:53.477023 | orchestrator | skipping: [testbed-node-0]
2025-05-25 04:01:53.477048 | orchestrator |
2025-05-25 04:01:53.477059 | orchestrator | TASK [horizon : Update custom policy file name] ********************************
2025-05-25 04:01:53.477082 | orchestrator | Sunday 25 May 2025 04:00:12 +0000 (0:00:00.351) 0:00:05.485 ************
2025-05-25 04:01:53.477093 | orchestrator | skipping: [testbed-node-0]
2025-05-25 04:01:53.477104 | orchestrator | skipping: [testbed-node-1]
2025-05-25 04:01:53.477115 | orchestrator | skipping: [testbed-node-2]
2025-05-25 04:01:53.477126 | orchestrator |
2025-05-25 04:01:53.477137 | orchestrator | TASK [horizon : Update policy file name] ***************************************
2025-05-25 04:01:53.477147 | orchestrator | Sunday 25 May 2025 04:00:12 +0000 (0:00:00.320) 0:00:05.805 ************
2025-05-25 04:01:53.477158 | orchestrator | ok: [testbed-node-0]
2025-05-25 04:01:53.477169 | orchestrator | ok: [testbed-node-1]
2025-05-25 04:01:53.477180 | orchestrator | ok: [testbed-node-2]
2025-05-25 04:01:53.477190 | orchestrator |
2025-05-25 04:01:53.477201 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************
2025-05-25 04:01:53.477212 | orchestrator | Sunday 25 May 2025 04:00:12 +0000 (0:00:00.308) 0:00:06.114 ************
2025-05-25 04:01:53.477223 | orchestrator | skipping: [testbed-node-0]
2025-05-25 04:01:53.477234 | orchestrator |
2025-05-25 04:01:53.477244 | orchestrator | TASK [horizon : Update custom policy file name] ********************************
2025-05-25 04:01:53.477255 | orchestrator | Sunday 25 May 2025 04:00:12 +0000 (0:00:00.129) 0:00:06.243 ************
2025-05-25 04:01:53.477339 | orchestrator | skipping: [testbed-node-0]
2025-05-25 04:01:53.477351 | orchestrator | skipping: [testbed-node-1]
2025-05-25 04:01:53.477362 | orchestrator | skipping: [testbed-node-2]
2025-05-25 04:01:53.477373 | orchestrator |
2025-05-25 04:01:53.477384 | orchestrator | TASK [horizon : Update policy file name] ***************************************
2025-05-25 04:01:53.477428 | orchestrator | Sunday 25 May 2025 04:00:13 +0000 (0:00:00.284) 0:00:06.528 ************
2025-05-25 04:01:53.477441 | orchestrator | ok: [testbed-node-0]
2025-05-25 04:01:53.477452 | orchestrator | ok: [testbed-node-1]
2025-05-25 04:01:53.477463 | orchestrator | ok: [testbed-node-2]
2025-05-25 04:01:53.477474 | orchestrator |
2025-05-25 04:01:53.477485 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************
2025-05-25 04:01:53.477495 | orchestrator | Sunday 25 May 2025 04:00:13 +0000 (0:00:00.506) 0:00:07.034 ************
2025-05-25 04:01:53.477506 | orchestrator | skipping: [testbed-node-0]
2025-05-25 04:01:53.477517 | orchestrator |
2025-05-25 04:01:53.477528 | orchestrator | TASK [horizon : Update custom policy file name] ********************************
2025-05-25 04:01:53.477539 | orchestrator | Sunday 25 May 2025 04:00:13 +0000 (0:00:00.125) 0:00:07.160 ************
2025-05-25 04:01:53.477550 | orchestrator | skipping: [testbed-node-0]
2025-05-25 04:01:53.477560 | orchestrator | skipping: [testbed-node-1]
2025-05-25 04:01:53.477571 | orchestrator | skipping: [testbed-node-2]
2025-05-25 04:01:53.477582 | orchestrator |
2025-05-25 04:01:53.477593 | orchestrator | TASK [horizon : Update policy file name] ***************************************
2025-05-25 04:01:53.477603 | orchestrator | Sunday 25 May 2025 04:00:14 +0000 (0:00:00.287) 0:00:07.447 ************
2025-05-25 04:01:53.477614 | orchestrator | ok: [testbed-node-0]
2025-05-25 04:01:53.477625 | orchestrator | ok: [testbed-node-1]
2025-05-25 04:01:53.477636 | orchestrator | ok: [testbed-node-2]
2025-05-25 04:01:53.477647 | orchestrator |
2025-05-25 04:01:53.477658 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************
2025-05-25 04:01:53.477669 | orchestrator | Sunday 25 May 2025 04:00:14 +0000 (0:00:00.292) 0:00:07.740 ************
2025-05-25 04:01:53.477702 | orchestrator | skipping: [testbed-node-0]
2025-05-25 04:01:53.477714 | orchestrator |
2025-05-25 04:01:53.477725 | orchestrator | TASK [horizon : Update custom policy file name] ********************************
2025-05-25 04:01:53.477735 | orchestrator | Sunday 25 May 2025 04:00:14 +0000 (0:00:00.123) 0:00:07.864 ************
2025-05-25 04:01:53.477746 | orchestrator | skipping: [testbed-node-0]
2025-05-25 04:01:53.477757 | orchestrator | skipping: [testbed-node-1]
2025-05-25 04:01:53.477768 | orchestrator | skipping: [testbed-node-2]
2025-05-25 04:01:53.477779 | orchestrator |
2025-05-25 04:01:53.477870 | orchestrator | TASK [horizon : Update policy file name] ***************************************
2025-05-25 04:01:53.477885 | orchestrator | Sunday 25 May 2025 04:00:15 +0000 (0:00:00.457) 0:00:08.321 ************
2025-05-25 04:01:53.477896 | orchestrator | ok: [testbed-node-0]
2025-05-25 04:01:53.477906 | orchestrator | ok: [testbed-node-1]
2025-05-25 04:01:53.477917 | orchestrator | ok: [testbed-node-2]
2025-05-25 04:01:53.477928 | orchestrator |
2025-05-25 04:01:53.477947 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************
2025-05-25 04:01:53.477959 | orchestrator | Sunday 25 May 2025 04:00:15 +0000 (0:00:00.321) 0:00:08.642 ************
2025-05-25 04:01:53.477970 | orchestrator | skipping: [testbed-node-0]
2025-05-25 04:01:53.477981 | orchestrator |
2025-05-25 04:01:53.477992 | orchestrator | TASK [horizon : Update custom policy file name] ********************************
2025-05-25 04:01:53.478002 | orchestrator | Sunday 25 May 2025 04:00:15 +0000 (0:00:00.180) 0:00:08.823 ************
2025-05-25 04:01:53.478013 | orchestrator | skipping: [testbed-node-0]
2025-05-25 04:01:53.478063 | orchestrator | skipping: [testbed-node-1]
2025-05-25 04:01:53.478075 | orchestrator | skipping: [testbed-node-2]
2025-05-25 04:01:53.478085 | orchestrator |
2025-05-25 04:01:53.478096 | orchestrator | TASK [horizon : Update policy file name] ***************************************
2025-05-25 04:01:53.478107 | orchestrator | Sunday 25 May 2025 04:00:15 +0000 (0:00:00.305) 0:00:09.128 ************
2025-05-25 04:01:53.478118 | orchestrator | ok: [testbed-node-0]
2025-05-25 04:01:53.478129 | orchestrator | ok: [testbed-node-1]
2025-05-25 04:01:53.478140 | orchestrator | ok: [testbed-node-2]
2025-05-25 04:01:53.478150 | orchestrator |
2025-05-25 04:01:53.478161 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************
2025-05-25 04:01:53.478172 | orchestrator | Sunday 25 May 2025 04:00:16 +0000 (0:00:00.332) 0:00:09.461 ************
2025-05-25 04:01:53.478183 | orchestrator | skipping: [testbed-node-0]
2025-05-25 04:01:53.478194 | orchestrator |
2025-05-25 04:01:53.478205 | orchestrator | TASK [horizon : Update custom policy file name] ********************************
2025-05-25 04:01:53.478215 | orchestrator | Sunday 25 May 2025 04:00:16 +0000 (0:00:00.116) 0:00:09.578 ************
2025-05-25 04:01:53.478226 | orchestrator | skipping: [testbed-node-0]
2025-05-25 04:01:53.478237 | orchestrator | skipping: [testbed-node-1]
2025-05-25 04:01:53.478248 | orchestrator | skipping: [testbed-node-2]
2025-05-25 04:01:53.478259 | orchestrator |
2025-05-25 04:01:53.478270 | orchestrator | TASK [horizon : Update policy file name] ***************************************
2025-05-25 04:01:53.478280 | orchestrator | Sunday 25 May 2025 04:00:16 +0000 (0:00:00.481) 0:00:10.059 ************
2025-05-25 04:01:53.478291 | orchestrator | ok: [testbed-node-0]
2025-05-25 04:01:53.478302 | orchestrator | ok: [testbed-node-1]
2025-05-25 04:01:53.478313 | orchestrator | ok: [testbed-node-2]
2025-05-25 04:01:53.478323 | orchestrator |
2025-05-25 04:01:53.478334 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************
2025-05-25 04:01:53.478345 | orchestrator | Sunday 25 May 2025 04:00:17 +0000 (0:00:00.304) 0:00:10.364 ************
2025-05-25 04:01:53.478355 | orchestrator | skipping: [testbed-node-0]
2025-05-25 04:01:53.478366 | orchestrator |
2025-05-25 04:01:53.478391 | orchestrator | TASK [horizon : Update custom policy file name] ********************************
2025-05-25 04:01:53.478403 | orchestrator | Sunday 25 May 2025 04:00:17 +0000 (0:00:00.127) 0:00:10.491 ************
2025-05-25 04:01:53.478414 | orchestrator | skipping: [testbed-node-0]
2025-05-25 04:01:53.478425 | orchestrator | skipping: [testbed-node-1]
2025-05-25 04:01:53.478444 | orchestrator | skipping: [testbed-node-2]
2025-05-25 04:01:53.478453 | orchestrator |
2025-05-25 04:01:53.478463 | orchestrator | TASK [horizon : Update policy file name] ***************************************
2025-05-25 04:01:53.478473 | orchestrator | Sunday 25 May 2025 04:00:17 +0000 (0:00:00.262) 0:00:10.753 ************
2025-05-25 04:01:53.478483 | orchestrator | ok: [testbed-node-0]
2025-05-25 04:01:53.478492 | orchestrator | ok: [testbed-node-1]
2025-05-25 04:01:53.478502 | orchestrator | ok: [testbed-node-2]
2025-05-25 04:01:53.478512 | orchestrator |
2025-05-25 04:01:53.478527 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************
2025-05-25 04:01:53.478537 | orchestrator | Sunday 25 May 2025 04:00:17 +0000 (0:00:00.478) 0:00:11.231 ************
2025-05-25 04:01:53.478546 | orchestrator | skipping: [testbed-node-0]
2025-05-25 04:01:53.478556 | orchestrator |
2025-05-25 04:01:53.478572 | orchestrator | TASK [horizon : Update custom policy file name] ********************************
2025-05-25 04:01:53.478588 | orchestrator | Sunday 25 May 2025 04:00:18 +0000 (0:00:00.134) 0:00:11.366 ************
2025-05-25 04:01:53.478605 | orchestrator | skipping: [testbed-node-0]
2025-05-25 04:01:53.478621 | orchestrator | skipping: [testbed-node-1]
2025-05-25 04:01:53.478637 | orchestrator | skipping: [testbed-node-2]
2025-05-25 04:01:53.478654 | orchestrator |
2025-05-25 04:01:53.478664 | orchestrator | TASK [horizon : Copying over config.json files for services] *******************
2025-05-25 04:01:53.478674 | orchestrator | Sunday 25 May 2025 04:00:18 +0000 (0:00:00.309) 0:00:11.675 ************
2025-05-25 04:01:53.478683 | orchestrator | changed: [testbed-node-1]
2025-05-25 04:01:53.478693 | orchestrator | changed: [testbed-node-0]
2025-05-25 04:01:53.478702 | orchestrator | changed: [testbed-node-2]
2025-05-25 04:01:53.478712 | orchestrator |
2025-05-25 04:01:53.478721 | orchestrator | TASK [horizon : Copying over horizon.conf] *************************************
2025-05-25 04:01:53.478731 | orchestrator | Sunday 25 May 2025 04:00:19 +0000 (0:00:01.522) 0:00:13.198 ************
2025-05-25 04:01:53.478740 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/horizon/templates/horizon.conf.j2)
2025-05-25 04:01:53.478750 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/horizon/templates/horizon.conf.j2)
2025-05-25 04:01:53.478759 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/horizon/templates/horizon.conf.j2)
2025-05-25 04:01:53.478769 | orchestrator |
2025-05-25 04:01:53.478778 | orchestrator | TASK [horizon : Copying over kolla-settings.py] ********************************
2025-05-25 04:01:53.478788 | orchestrator | Sunday 25 May 2025 04:00:21 +0000 (0:00:02.032) 0:00:15.231 ************
2025-05-25 04:01:53.478824 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/horizon/templates/_9998-kolla-settings.py.j2)
2025-05-25 04:01:53.478842 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/horizon/templates/_9998-kolla-settings.py.j2)
2025-05-25 04:01:53.478860 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/horizon/templates/_9998-kolla-settings.py.j2)
2025-05-25 04:01:53.478875 | orchestrator |
2025-05-25 04:01:53.478888 | orchestrator | TASK [horizon : Copying over custom-settings.py] *******************************
2025-05-25 04:01:53.478898 | orchestrator | Sunday 25 May 2025 04:00:24 +0000 (0:00:02.237) 0:00:17.468 ************
2025-05-25 04:01:53.478917 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/horizon/templates/_9999-custom-settings.py.j2)
2025-05-25 04:01:53.478927 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/horizon/templates/_9999-custom-settings.py.j2)
2025-05-25 04:01:53.478937 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/horizon/templates/_9999-custom-settings.py.j2)
2025-05-25 04:01:53.478946 | orchestrator |
2025-05-25 04:01:53.478956 | orchestrator | TASK [horizon : Copying over existing policy file] *****************************
2025-05-25 04:01:53.478966 | orchestrator | Sunday 25 May 2025 04:00:25 +0000 (0:00:01.514) 0:00:18.983 ************
2025-05-25 04:01:53.478975 | orchestrator | skipping: [testbed-node-0]
2025-05-25 04:01:53.478985 | orchestrator | skipping: [testbed-node-1]
2025-05-25 04:01:53.478995 | orchestrator | skipping: [testbed-node-2]
2025-05-25 04:01:53.479018 | orchestrator |
2025-05-25 04:01:53.479028 | orchestrator | TASK [horizon : Copying over custom themes] ************************************
2025-05-25 04:01:53.479037 | orchestrator | Sunday 25 May 2025 04:00:25 +0000 (0:00:00.287) 0:00:19.270 ************
2025-05-25 04:01:53.479047 | orchestrator | skipping: [testbed-node-0]
2025-05-25 04:01:53.479057 | orchestrator | skipping: [testbed-node-1]
2025-05-25 04:01:53.479067 | orchestrator | skipping: [testbed-node-2]
2025-05-25 04:01:53.479076 | orchestrator |
2025-05-25 04:01:53.479086 | orchestrator | TASK [horizon : include_tasks] *************************************************
2025-05-25 04:01:53.479096 | orchestrator | Sunday 25 May 2025 04:00:26 +0000 (0:00:00.286) 0:00:19.556 ************
2025-05-25 04:01:53.479105 | orchestrator | included: /ansible/roles/horizon/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-05-25 04:01:53.479129 | orchestrator |
2025-05-25 04:01:53.479139 | orchestrator | TASK [service-cert-copy : horizon | Copying over extra CA certificates] ********
2025-05-25 04:01:53.479149 | orchestrator | Sunday 25 May 2025 04:00:27 +0000 (0:00:00.777) 0:00:20.334 ************
2025-05-25 04:01:53.479168 | orchestrator | changed: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})
2025-05-25 04:01:53.479201 | orchestrator | changed: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})
2025-05-25 04:01:53.479226 | orchestrator | changed: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})
2025-05-25 04:01:53.479237 | orchestrator |
2025-05-25 04:01:53.479247 | orchestrator | TASK [service-cert-copy : horizon | Copying over backend internal TLS certificate] ***
2025-05-25 04:01:53.479257 | orchestrator | Sunday 25 May 2025 04:00:28 +0000 (0:00:01.565)
0:00:21.900 ************
2025-05-25 04:01:53.479276 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})
2025-05-25 04:01:53.479293 | orchestrator | skipping: [testbed-node-0]
2025-05-25 04:01:53.479310 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})
2025-05-25 04:01:53.479335 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})
2025-05-25 04:01:53.479346 | orchestrator | skipping: [testbed-node-1]
2025-05-25 04:01:53.479356 | orchestrator | skipping: [testbed-node-2]
2025-05-25 04:01:53.479366 | orchestrator |
2025-05-25 04:01:53.479380 | orchestrator | TASK [service-cert-copy : horizon | Copying over backend internal TLS key] *****
2025-05-25 04:01:53.479390 | orchestrator | Sunday 25 May 2025 04:00:29 +0000 (0:00:00.611) 0:00:22.511 ************
2025-05-25 04:01:53.479406 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})
2025-05-25 04:01:53.479423 | orchestrator | skipping: [testbed-node-0]
2025-05-25 04:01:53.479439 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})
2025-05-25 04:01:53.479450 | orchestrator | skipping: [testbed-node-1]
2025-05-25 04:01:53.479467 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})
2025-05-25 04:01:53.479484 | orchestrator | skipping: [testbed-node-2]
2025-05-25 04:01:53.479494 | orchestrator |
2025-05-25 04:01:53.479504 |
orchestrator | TASK [horizon : Deploy horizon container] ************************************** 2025-05-25 04:01:53.479513 | orchestrator | Sunday 25 May 2025 04:00:30 +0000 (0:00:01.016) 0:00:23.528 ************ 2025-05-25 04:01:53.479529 | orchestrator | changed: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': 
['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2025-05-25 04:01:53.479548 | orchestrator | changed: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2025-05-25 04:01:53.479570 | orchestrator | changed: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 
'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2025-05-25 04:01:53.479581 | orchestrator | 2025-05-25 04:01:53.479591 | orchestrator | TASK [horizon : include_tasks] ************************************************* 2025-05-25 04:01:53.479601 | orchestrator | Sunday 25 May 2025 04:00:31 +0000 (0:00:01.339) 0:00:24.868 ************ 2025-05-25 04:01:53.479616 | orchestrator | skipping: [testbed-node-0] 2025-05-25 04:01:53.479626 | orchestrator | skipping: [testbed-node-1] 2025-05-25 04:01:53.479636 | orchestrator | skipping: [testbed-node-2] 2025-05-25 04:01:53.479645 | orchestrator | 2025-05-25 04:01:53.479655 | orchestrator | TASK [horizon : include_tasks] ************************************************* 2025-05-25 04:01:53.479665 | orchestrator | Sunday 25 May 2025 04:00:31 +0000 (0:00:00.294) 0:00:25.162 ************ 2025-05-25 04:01:53.479675 | orchestrator | included: /ansible/roles/horizon/tasks/bootstrap.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-25 04:01:53.479684 | orchestrator | 2025-05-25 04:01:53.479694 | orchestrator | TASK [horizon : Creating Horizon database] ************************************* 2025-05-25 04:01:53.479704 | orchestrator | Sunday 25 May 2025 04:00:32 +0000 (0:00:00.687) 0:00:25.850 
************ 2025-05-25 04:01:53.479713 | orchestrator | changed: [testbed-node-0] 2025-05-25 04:01:53.479723 | orchestrator | 2025-05-25 04:01:53.479738 | orchestrator | TASK [horizon : Creating Horizon database user and setting permissions] ******** 2025-05-25 04:01:53.479748 | orchestrator | Sunday 25 May 2025 04:00:34 +0000 (0:00:02.051) 0:00:27.901 ************ 2025-05-25 04:01:53.479758 | orchestrator | changed: [testbed-node-0] 2025-05-25 04:01:53.479768 | orchestrator | 2025-05-25 04:01:53.479777 | orchestrator | TASK [horizon : Running Horizon bootstrap container] *************************** 2025-05-25 04:01:53.479787 | orchestrator | Sunday 25 May 2025 04:00:36 +0000 (0:00:02.023) 0:00:29.924 ************ 2025-05-25 04:01:53.479850 | orchestrator | changed: [testbed-node-0] 2025-05-25 04:01:53.479860 | orchestrator | 2025-05-25 04:01:53.479870 | orchestrator | TASK [horizon : Flush handlers] ************************************************ 2025-05-25 04:01:53.479880 | orchestrator | Sunday 25 May 2025 04:00:51 +0000 (0:00:14.711) 0:00:44.636 ************ 2025-05-25 04:01:53.479889 | orchestrator | 2025-05-25 04:01:53.479899 | orchestrator | TASK [horizon : Flush handlers] ************************************************ 2025-05-25 04:01:53.479907 | orchestrator | Sunday 25 May 2025 04:00:51 +0000 (0:00:00.063) 0:00:44.700 ************ 2025-05-25 04:01:53.479915 | orchestrator | 2025-05-25 04:01:53.479923 | orchestrator | TASK [horizon : Flush handlers] ************************************************ 2025-05-25 04:01:53.479931 | orchestrator | Sunday 25 May 2025 04:00:51 +0000 (0:00:00.062) 0:00:44.762 ************ 2025-05-25 04:01:53.479939 | orchestrator | 2025-05-25 04:01:53.479946 | orchestrator | RUNNING HANDLER [horizon : Restart horizon container] ************************** 2025-05-25 04:01:53.479954 | orchestrator | Sunday 25 May 2025 04:00:51 +0000 (0:00:00.063) 0:00:44.825 ************ 2025-05-25 04:01:53.479962 | orchestrator | changed: 
[testbed-node-0] 2025-05-25 04:01:53.479970 | orchestrator | changed: [testbed-node-2] 2025-05-25 04:01:53.479978 | orchestrator | changed: [testbed-node-1] 2025-05-25 04:01:53.479986 | orchestrator | 2025-05-25 04:01:53.479994 | orchestrator | PLAY RECAP ********************************************************************* 2025-05-25 04:01:53.480002 | orchestrator | testbed-node-0 : ok=37  changed=11  unreachable=0 failed=0 skipped=25  rescued=0 ignored=0 2025-05-25 04:01:53.480010 | orchestrator | testbed-node-1 : ok=34  changed=8  unreachable=0 failed=0 skipped=15  rescued=0 ignored=0 2025-05-25 04:01:53.480018 | orchestrator | testbed-node-2 : ok=34  changed=8  unreachable=0 failed=0 skipped=15  rescued=0 ignored=0 2025-05-25 04:01:53.480026 | orchestrator | 2025-05-25 04:01:53.480034 | orchestrator | 2025-05-25 04:01:53.480042 | orchestrator | TASKS RECAP ******************************************************************** 2025-05-25 04:01:53.480049 | orchestrator | Sunday 25 May 2025 04:01:52 +0000 (0:01:00.552) 0:01:45.378 ************ 2025-05-25 04:01:53.480057 | orchestrator | =============================================================================== 2025-05-25 04:01:53.480065 | orchestrator | horizon : Restart horizon container ------------------------------------ 60.55s 2025-05-25 04:01:53.480073 | orchestrator | horizon : Running Horizon bootstrap container -------------------------- 14.71s 2025-05-25 04:01:53.480089 | orchestrator | horizon : Copying over kolla-settings.py -------------------------------- 2.24s 2025-05-25 04:01:53.480097 | orchestrator | horizon : Creating Horizon database ------------------------------------- 2.05s 2025-05-25 04:01:53.480109 | orchestrator | horizon : Copying over horizon.conf ------------------------------------- 2.03s 2025-05-25 04:01:53.480117 | orchestrator | horizon : Creating Horizon database user and setting permissions -------- 2.02s 2025-05-25 04:01:53.480125 | orchestrator | service-cert-copy : horizon 
| Copying over extra CA certificates -------- 1.57s 2025-05-25 04:01:53.480133 | orchestrator | horizon : Copying over config.json files for services ------------------- 1.52s 2025-05-25 04:01:53.480140 | orchestrator | horizon : Copying over custom-settings.py ------------------------------- 1.51s 2025-05-25 04:01:53.480148 | orchestrator | horizon : Deploy horizon container -------------------------------------- 1.34s 2025-05-25 04:01:53.480156 | orchestrator | horizon : Ensuring config directories exist ----------------------------- 1.04s 2025-05-25 04:01:53.480163 | orchestrator | service-cert-copy : horizon | Copying over backend internal TLS key ----- 1.02s 2025-05-25 04:01:53.480171 | orchestrator | horizon : include_tasks ------------------------------------------------- 0.78s 2025-05-25 04:01:53.480179 | orchestrator | horizon : include_tasks ------------------------------------------------- 0.69s 2025-05-25 04:01:53.480187 | orchestrator | horizon : include_tasks ------------------------------------------------- 0.63s 2025-05-25 04:01:53.480206 | orchestrator | service-cert-copy : horizon | Copying over backend internal TLS certificate --- 0.61s 2025-05-25 04:01:53.480215 | orchestrator | horizon : Update policy file name --------------------------------------- 0.51s 2025-05-25 04:01:53.480228 | orchestrator | horizon : Update custom policy file name -------------------------------- 0.48s 2025-05-25 04:01:53.480243 | orchestrator | horizon : Update policy file name --------------------------------------- 0.48s 2025-05-25 04:01:53.480269 | orchestrator | horizon : Update custom policy file name -------------------------------- 0.46s 2025-05-25 04:01:53.480277 | orchestrator | 2025-05-25 04:01:53 | INFO  | Task 1793fb62-a55b-4f95-a01d-c56d31d9823a is in state STARTED 2025-05-25 04:01:53.480285 | orchestrator | 2025-05-25 04:01:53 | INFO  | Wait 1 second(s) until the next check 2025-05-25 04:01:56.521652 | orchestrator | 2025-05-25 04:01:56 | INFO  | Task 
70e0c9ed-37fa-4303-84bd-948b4f696f4e is in state STARTED 2025-05-25 04:01:56.523302 | orchestrator | 2025-05-25 04:01:56 | INFO  | Task 1793fb62-a55b-4f95-a01d-c56d31d9823a is in state STARTED 2025-05-25 04:01:56.523342 | orchestrator | 2025-05-25 04:01:56 | INFO  | Wait 1 second(s) until the next check 2025-05-25 04:01:59.575506 | orchestrator | 2025-05-25 04:01:59 | INFO  | Task 70e0c9ed-37fa-4303-84bd-948b4f696f4e is in state STARTED 2025-05-25 04:01:59.577701 | orchestrator | 2025-05-25 04:01:59 | INFO  | Task 1793fb62-a55b-4f95-a01d-c56d31d9823a is in state STARTED 2025-05-25 04:01:59.577766 | orchestrator | 2025-05-25 04:01:59 | INFO  | Wait 1 second(s) until the next check 2025-05-25 04:02:02.621574 | orchestrator | 2025-05-25 04:02:02 | INFO  | Task 70e0c9ed-37fa-4303-84bd-948b4f696f4e is in state STARTED 2025-05-25 04:02:02.623349 | orchestrator | 2025-05-25 04:02:02 | INFO  | Task 1793fb62-a55b-4f95-a01d-c56d31d9823a is in state STARTED 2025-05-25 04:02:02.623386 | orchestrator | 2025-05-25 04:02:02 | INFO  | Wait 1 second(s) until the next check 2025-05-25 04:02:05.674101 | orchestrator | 2025-05-25 04:02:05 | INFO  | Task 70e0c9ed-37fa-4303-84bd-948b4f696f4e is in state STARTED 2025-05-25 04:02:05.674959 | orchestrator | 2025-05-25 04:02:05 | INFO  | Task 1793fb62-a55b-4f95-a01d-c56d31d9823a is in state STARTED 2025-05-25 04:02:05.675000 | orchestrator | 2025-05-25 04:02:05 | INFO  | Wait 1 second(s) until the next check 2025-05-25 04:02:08.727039 | orchestrator | 2025-05-25 04:02:08 | INFO  | Task 70e0c9ed-37fa-4303-84bd-948b4f696f4e is in state STARTED 2025-05-25 04:02:08.728327 | orchestrator | 2025-05-25 04:02:08 | INFO  | Task 1793fb62-a55b-4f95-a01d-c56d31d9823a is in state STARTED 2025-05-25 04:02:08.728394 | orchestrator | 2025-05-25 04:02:08 | INFO  | Wait 1 second(s) until the next check 2025-05-25 04:02:11.777721 | orchestrator | 2025-05-25 04:02:11 | INFO  | Task 70e0c9ed-37fa-4303-84bd-948b4f696f4e is in state STARTED 2025-05-25 
04:02:11.779258 | orchestrator | 2025-05-25 04:02:11 | INFO  | Task 1793fb62-a55b-4f95-a01d-c56d31d9823a is in state STARTED 2025-05-25 04:02:11.779298 | orchestrator | 2025-05-25 04:02:11 | INFO  | Wait 1 second(s) until the next check 2025-05-25 04:02:14.830456 | orchestrator | 2025-05-25 04:02:14 | INFO  | Task 70e0c9ed-37fa-4303-84bd-948b4f696f4e is in state STARTED 2025-05-25 04:02:14.832660 | orchestrator | 2025-05-25 04:02:14 | INFO  | Task 1793fb62-a55b-4f95-a01d-c56d31d9823a is in state STARTED 2025-05-25 04:02:14.832722 | orchestrator | 2025-05-25 04:02:14 | INFO  | Wait 1 second(s) until the next check 2025-05-25 04:02:17.882302 | orchestrator | 2025-05-25 04:02:17 | INFO  | Task 70e0c9ed-37fa-4303-84bd-948b4f696f4e is in state STARTED 2025-05-25 04:02:17.884123 | orchestrator | 2025-05-25 04:02:17 | INFO  | Task 1793fb62-a55b-4f95-a01d-c56d31d9823a is in state STARTED 2025-05-25 04:02:17.884215 | orchestrator | 2025-05-25 04:02:17 | INFO  | Wait 1 second(s) until the next check 2025-05-25 04:02:20.936573 | orchestrator | 2025-05-25 04:02:20 | INFO  | Task 70e0c9ed-37fa-4303-84bd-948b4f696f4e is in state STARTED 2025-05-25 04:02:20.938003 | orchestrator | 2025-05-25 04:02:20 | INFO  | Task 1793fb62-a55b-4f95-a01d-c56d31d9823a is in state STARTED 2025-05-25 04:02:20.938113 | orchestrator | 2025-05-25 04:02:20 | INFO  | Wait 1 second(s) until the next check 2025-05-25 04:02:23.985989 | orchestrator | 2025-05-25 04:02:23 | INFO  | Task 70e0c9ed-37fa-4303-84bd-948b4f696f4e is in state STARTED 2025-05-25 04:02:23.987026 | orchestrator | 2025-05-25 04:02:23 | INFO  | Task 1793fb62-a55b-4f95-a01d-c56d31d9823a is in state STARTED 2025-05-25 04:02:23.987067 | orchestrator | 2025-05-25 04:02:23 | INFO  | Wait 1 second(s) until the next check 2025-05-25 04:02:27.038606 | orchestrator | 2025-05-25 04:02:27 | INFO  | Task 70e0c9ed-37fa-4303-84bd-948b4f696f4e is in state STARTED 2025-05-25 04:02:27.041187 | orchestrator | 2025-05-25 04:02:27 | INFO  | Task 
1793fb62-a55b-4f95-a01d-c56d31d9823a is in state STARTED 2025-05-25 04:02:27.041245 | orchestrator | 2025-05-25 04:02:27 | INFO  | Wait 1 second(s) until the next check 2025-05-25 04:02:30.098075 | orchestrator | 2025-05-25 04:02:30 | INFO  | Task b12905c3-76dd-447f-b4d2-764f064e14aa is in state STARTED 2025-05-25 04:02:30.100431 | orchestrator | 2025-05-25 04:02:30 | INFO  | Task 94c50c14-4d37-4996-8509-88a6d36cf54b is in state STARTED 2025-05-25 04:02:30.103908 | orchestrator | 2025-05-25 04:02:30 | INFO  | Task 74d32bef-f393-42b8-bfa3-4bb0a1100f52 is in state STARTED 2025-05-25 04:02:30.105134 | orchestrator | 2025-05-25 04:02:30 | INFO  | Task 70e0c9ed-37fa-4303-84bd-948b4f696f4e is in state STARTED 2025-05-25 04:02:30.108392 | orchestrator | 2025-05-25 04:02:30 | INFO  | Task 1793fb62-a55b-4f95-a01d-c56d31d9823a is in state SUCCESS 2025-05-25 04:02:30.108428 | orchestrator | 2025-05-25 04:02:30 | INFO  | Wait 1 second(s) until the next check 2025-05-25 04:02:33.150101 | orchestrator | 2025-05-25 04:02:33 | INFO  | Task b12905c3-76dd-447f-b4d2-764f064e14aa is in state STARTED 2025-05-25 04:02:33.152163 | orchestrator | 2025-05-25 04:02:33 | INFO  | Task 94c50c14-4d37-4996-8509-88a6d36cf54b is in state STARTED 2025-05-25 04:02:33.154484 | orchestrator | 2025-05-25 04:02:33 | INFO  | Task 74d32bef-f393-42b8-bfa3-4bb0a1100f52 is in state STARTED 2025-05-25 04:02:33.156266 | orchestrator | 2025-05-25 04:02:33 | INFO  | Task 70e0c9ed-37fa-4303-84bd-948b4f696f4e is in state STARTED 2025-05-25 04:02:33.156587 | orchestrator | 2025-05-25 04:02:33 | INFO  | Wait 1 second(s) until the next check 2025-05-25 04:02:36.194509 | orchestrator | 2025-05-25 04:02:36 | INFO  | Task f07d46f4-3360-4bb8-ac88-fd4536361d65 is in state STARTED 2025-05-25 04:02:36.195948 | orchestrator | 2025-05-25 04:02:36 | INFO  | Task b12905c3-76dd-447f-b4d2-764f064e14aa is in state STARTED 2025-05-25 04:02:36.196662 | orchestrator | 2025-05-25 04:02:36 | INFO  | Task 
94c50c14-4d37-4996-8509-88a6d36cf54b is in state SUCCESS 2025-05-25 04:02:36.197730 | orchestrator | 2025-05-25 04:02:36 | INFO  | Task 8f07b207-3c6d-4e25-84ee-dee7e479a720 is in state STARTED 2025-05-25 04:02:36.198415 | orchestrator | 2025-05-25 04:02:36 | INFO  | Task 74d32bef-f393-42b8-bfa3-4bb0a1100f52 is in state STARTED 2025-05-25 04:02:36.199272 | orchestrator | 2025-05-25 04:02:36 | INFO  | Task 70e0c9ed-37fa-4303-84bd-948b4f696f4e is in state STARTED 2025-05-25 04:02:36.199416 | orchestrator | 2025-05-25 04:02:36 | INFO  | Wait 1 second(s) until the next check 2025-05-25 04:02:39.247671 | orchestrator | 2025-05-25 04:02:39 | INFO  | Task f07d46f4-3360-4bb8-ac88-fd4536361d65 is in state STARTED 2025-05-25 04:02:39.248031 | orchestrator | 2025-05-25 04:02:39 | INFO  | Task b12905c3-76dd-447f-b4d2-764f064e14aa is in state STARTED 2025-05-25 04:02:39.248805 | orchestrator | 2025-05-25 04:02:39 | INFO  | Task 8f07b207-3c6d-4e25-84ee-dee7e479a720 is in state STARTED 2025-05-25 04:02:39.249623 | orchestrator | 2025-05-25 04:02:39 | INFO  | Task 74d32bef-f393-42b8-bfa3-4bb0a1100f52 is in state STARTED 2025-05-25 04:02:39.250802 | orchestrator | 2025-05-25 04:02:39 | INFO  | Task 70e0c9ed-37fa-4303-84bd-948b4f696f4e is in state STARTED 2025-05-25 04:02:39.250871 | orchestrator | 2025-05-25 04:02:39 | INFO  | Wait 1 second(s) until the next check 2025-05-25 04:02:42.304817 | orchestrator | 2025-05-25 04:02:42 | INFO  | Task f07d46f4-3360-4bb8-ac88-fd4536361d65 is in state STARTED 2025-05-25 04:02:42.306323 | orchestrator | 2025-05-25 04:02:42 | INFO  | Task b12905c3-76dd-447f-b4d2-764f064e14aa is in state STARTED 2025-05-25 04:02:42.307297 | orchestrator | 2025-05-25 04:02:42 | INFO  | Task 8f07b207-3c6d-4e25-84ee-dee7e479a720 is in state STARTED 2025-05-25 04:02:42.308317 | orchestrator | 2025-05-25 04:02:42 | INFO  | Task 74d32bef-f393-42b8-bfa3-4bb0a1100f52 is in state STARTED 2025-05-25 04:02:42.309185 | orchestrator | 2025-05-25 04:02:42 | INFO  | Task 
70e0c9ed-37fa-4303-84bd-948b4f696f4e is in state STARTED 2025-05-25 04:02:42.309414 | orchestrator | 2025-05-25 04:02:42 | INFO  | Wait 1 second(s) until the next check 2025-05-25 04:02:45.366219 | orchestrator | 2025-05-25 04:02:45 | INFO  | Task f07d46f4-3360-4bb8-ac88-fd4536361d65 is in state STARTED 2025-05-25 04:02:45.366531 | orchestrator | 2025-05-25 04:02:45 | INFO  | Task b12905c3-76dd-447f-b4d2-764f064e14aa is in state STARTED 2025-05-25 04:02:45.368863 | orchestrator | 2025-05-25 04:02:45 | INFO  | Task 8f07b207-3c6d-4e25-84ee-dee7e479a720 is in state STARTED 2025-05-25 04:02:45.368899 | orchestrator | 2025-05-25 04:02:45 | INFO  | Task 74d32bef-f393-42b8-bfa3-4bb0a1100f52 is in state STARTED 2025-05-25 04:02:45.369937 | orchestrator | 2025-05-25 04:02:45 | INFO  | Task 70e0c9ed-37fa-4303-84bd-948b4f696f4e is in state STARTED 2025-05-25 04:02:45.369982 | orchestrator | 2025-05-25 04:02:45 | INFO  | Wait 1 second(s) until the next check 2025-05-25 04:02:48.404370 | orchestrator | 2025-05-25 04:02:48 | INFO  | Task f07d46f4-3360-4bb8-ac88-fd4536361d65 is in state STARTED 2025-05-25 04:02:48.406208 | orchestrator | 2025-05-25 04:02:48 | INFO  | Task b12905c3-76dd-447f-b4d2-764f064e14aa is in state STARTED 2025-05-25 04:02:48.408056 | orchestrator | 2025-05-25 04:02:48 | INFO  | Task 8f07b207-3c6d-4e25-84ee-dee7e479a720 is in state STARTED 2025-05-25 04:02:48.410060 | orchestrator | 2025-05-25 04:02:48 | INFO  | Task 74d32bef-f393-42b8-bfa3-4bb0a1100f52 is in state STARTED 2025-05-25 04:02:48.413020 | orchestrator | 2025-05-25 04:02:48 | INFO  | Task 70e0c9ed-37fa-4303-84bd-948b4f696f4e is in state SUCCESS 2025-05-25 04:02:48.417823 | orchestrator | 2025-05-25 04:02:48.417879 | orchestrator | 2025-05-25 04:02:48.417890 | orchestrator | PLAY [Apply role cephclient] *************************************************** 2025-05-25 04:02:48.417900 | orchestrator | 2025-05-25 04:02:48.417907 | orchestrator | TASK [osism.services.cephclient : Include container 
tasks] ********************* 2025-05-25 04:02:48.417915 | orchestrator | Sunday 25 May 2025 04:01:39 +0000 (0:00:00.226) 0:00:00.226 ************ 2025-05-25 04:02:48.417924 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/cephclient/tasks/container.yml for testbed-manager 2025-05-25 04:02:48.417933 | orchestrator | 2025-05-25 04:02:48.417941 | orchestrator | TASK [osism.services.cephclient : Create required directories] ***************** 2025-05-25 04:02:48.417948 | orchestrator | Sunday 25 May 2025 04:01:39 +0000 (0:00:00.214) 0:00:00.441 ************ 2025-05-25 04:02:48.417956 | orchestrator | changed: [testbed-manager] => (item=/opt/cephclient/configuration) 2025-05-25 04:02:48.417963 | orchestrator | changed: [testbed-manager] => (item=/opt/cephclient/data) 2025-05-25 04:02:48.417971 | orchestrator | ok: [testbed-manager] => (item=/opt/cephclient) 2025-05-25 04:02:48.417978 | orchestrator | 2025-05-25 04:02:48.417986 | orchestrator | TASK [osism.services.cephclient : Copy configuration files] ******************** 2025-05-25 04:02:48.417993 | orchestrator | Sunday 25 May 2025 04:01:40 +0000 (0:00:01.176) 0:00:01.617 ************ 2025-05-25 04:02:48.418000 | orchestrator | changed: [testbed-manager] => (item={'src': 'ceph.conf.j2', 'dest': '/opt/cephclient/configuration/ceph.conf'}) 2025-05-25 04:02:48.418008 | orchestrator | 2025-05-25 04:02:48.418052 | orchestrator | TASK [osism.services.cephclient : Copy keyring file] *************************** 2025-05-25 04:02:48.418060 | orchestrator | Sunday 25 May 2025 04:01:41 +0000 (0:00:01.081) 0:00:02.698 ************ 2025-05-25 04:02:48.418068 | orchestrator | changed: [testbed-manager] 2025-05-25 04:02:48.418075 | orchestrator | 2025-05-25 04:02:48.418083 | orchestrator | TASK [osism.services.cephclient : Copy docker-compose.yml file] **************** 2025-05-25 04:02:48.418090 | orchestrator | Sunday 25 May 2025 04:01:42 +0000 (0:00:00.975) 0:00:03.673 ************ 
2025-05-25 04:02:48.418097 | orchestrator | changed: [testbed-manager] 2025-05-25 04:02:48.418105 | orchestrator | 2025-05-25 04:02:48.418112 | orchestrator | TASK [osism.services.cephclient : Manage cephclient service] ******************* 2025-05-25 04:02:48.418119 | orchestrator | Sunday 25 May 2025 04:01:43 +0000 (0:00:00.746) 0:00:04.420 ************ 2025-05-25 04:02:48.418126 | orchestrator | FAILED - RETRYING: [testbed-manager]: Manage cephclient service (10 retries left). 2025-05-25 04:02:48.418133 | orchestrator | ok: [testbed-manager] 2025-05-25 04:02:48.418141 | orchestrator | 2025-05-25 04:02:48.418148 | orchestrator | TASK [osism.services.cephclient : Copy wrapper scripts] ************************ 2025-05-25 04:02:48.418155 | orchestrator | Sunday 25 May 2025 04:02:18 +0000 (0:00:35.353) 0:00:39.773 ************ 2025-05-25 04:02:48.418441 | orchestrator | changed: [testbed-manager] => (item=ceph) 2025-05-25 04:02:48.418449 | orchestrator | changed: [testbed-manager] => (item=ceph-authtool) 2025-05-25 04:02:48.418457 | orchestrator | changed: [testbed-manager] => (item=rados) 2025-05-25 04:02:48.418464 | orchestrator | changed: [testbed-manager] => (item=radosgw-admin) 2025-05-25 04:02:48.418471 | orchestrator | changed: [testbed-manager] => (item=rbd) 2025-05-25 04:02:48.418478 | orchestrator | 2025-05-25 04:02:48.418486 | orchestrator | TASK [osism.services.cephclient : Remove old wrapper scripts] ****************** 2025-05-25 04:02:48.418506 | orchestrator | Sunday 25 May 2025 04:02:22 +0000 (0:00:03.975) 0:00:43.748 ************ 2025-05-25 04:02:48.418531 | orchestrator | ok: [testbed-manager] => (item=crushtool) 2025-05-25 04:02:48.418539 | orchestrator | 2025-05-25 04:02:48.418546 | orchestrator | TASK [osism.services.cephclient : Include package tasks] *********************** 2025-05-25 04:02:48.418554 | orchestrator | Sunday 25 May 2025 04:02:23 +0000 (0:00:00.421) 0:00:44.170 ************ 2025-05-25 04:02:48.418561 | orchestrator | skipping: 
[testbed-manager] 2025-05-25 04:02:48.418568 | orchestrator | 2025-05-25 04:02:48.418576 | orchestrator | TASK [osism.services.cephclient : Include rook task] *************************** 2025-05-25 04:02:48.418583 | orchestrator | Sunday 25 May 2025 04:02:23 +0000 (0:00:00.139) 0:00:44.310 ************ 2025-05-25 04:02:48.418590 | orchestrator | skipping: [testbed-manager] 2025-05-25 04:02:48.418597 | orchestrator | 2025-05-25 04:02:48.418604 | orchestrator | RUNNING HANDLER [osism.services.cephclient : Restart cephclient service] ******* 2025-05-25 04:02:48.418612 | orchestrator | Sunday 25 May 2025 04:02:23 +0000 (0:00:00.278) 0:00:44.588 ************ 2025-05-25 04:02:48.418619 | orchestrator | changed: [testbed-manager] 2025-05-25 04:02:48.418626 | orchestrator | 2025-05-25 04:02:48.418633 | orchestrator | RUNNING HANDLER [osism.services.cephclient : Ensure that all containers are up] *** 2025-05-25 04:02:48.418641 | orchestrator | Sunday 25 May 2025 04:02:25 +0000 (0:00:01.708) 0:00:46.297 ************ 2025-05-25 04:02:48.418648 | orchestrator | changed: [testbed-manager] 2025-05-25 04:02:48.418655 | orchestrator | 2025-05-25 04:02:48.418663 | orchestrator | RUNNING HANDLER [osism.services.cephclient : Wait for an healthy service] ****** 2025-05-25 04:02:48.418670 | orchestrator | Sunday 25 May 2025 04:02:26 +0000 (0:00:00.677) 0:00:46.974 ************ 2025-05-25 04:02:48.418677 | orchestrator | changed: [testbed-manager] 2025-05-25 04:02:48.418684 | orchestrator | 2025-05-25 04:02:48.418692 | orchestrator | RUNNING HANDLER [osism.services.cephclient : Copy bash completion scripts] ***** 2025-05-25 04:02:48.418699 | orchestrator | Sunday 25 May 2025 04:02:26 +0000 (0:00:00.576) 0:00:47.550 ************ 2025-05-25 04:02:48.418706 | orchestrator | ok: [testbed-manager] => (item=ceph) 2025-05-25 04:02:48.418714 | orchestrator | ok: [testbed-manager] => (item=rados) 2025-05-25 04:02:48.418721 | orchestrator | ok: [testbed-manager] => (item=radosgw-admin) 2025-05-25 
04:02:48.418728 | orchestrator | ok: [testbed-manager] => (item=rbd) 2025-05-25 04:02:48.418736 | orchestrator | 2025-05-25 04:02:48.418743 | orchestrator | PLAY RECAP ********************************************************************* 2025-05-25 04:02:48.418750 | orchestrator | testbed-manager : ok=12  changed=8  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-05-25 04:02:48.418759 | orchestrator | 2025-05-25 04:02:48.418802 | orchestrator | 2025-05-25 04:02:48.418821 | orchestrator | TASKS RECAP ******************************************************************** 2025-05-25 04:02:48.418829 | orchestrator | Sunday 25 May 2025 04:02:28 +0000 (0:00:01.428) 0:00:48.979 ************ 2025-05-25 04:02:48.418836 | orchestrator | =============================================================================== 2025-05-25 04:02:48.418843 | orchestrator | osism.services.cephclient : Manage cephclient service ------------------ 35.35s 2025-05-25 04:02:48.418850 | orchestrator | osism.services.cephclient : Copy wrapper scripts ------------------------ 3.98s 2025-05-25 04:02:48.418858 | orchestrator | osism.services.cephclient : Restart cephclient service ------------------ 1.71s 2025-05-25 04:02:48.418865 | orchestrator | osism.services.cephclient : Copy bash completion scripts ---------------- 1.43s 2025-05-25 04:02:48.418872 | orchestrator | osism.services.cephclient : Create required directories ----------------- 1.18s 2025-05-25 04:02:48.418880 | orchestrator | osism.services.cephclient : Copy configuration files -------------------- 1.08s 2025-05-25 04:02:48.418887 | orchestrator | osism.services.cephclient : Copy keyring file --------------------------- 0.98s 2025-05-25 04:02:48.418894 | orchestrator | osism.services.cephclient : Copy docker-compose.yml file ---------------- 0.75s 2025-05-25 04:02:48.418901 | orchestrator | osism.services.cephclient : Ensure that all containers are up ----------- 0.68s 2025-05-25 04:02:48.418909 | orchestrator | 
osism.services.cephclient : Wait for an healthy service ----------------- 0.58s 2025-05-25 04:02:48.418923 | orchestrator | osism.services.cephclient : Remove old wrapper scripts ------------------ 0.42s 2025-05-25 04:02:48.418931 | orchestrator | osism.services.cephclient : Include rook task --------------------------- 0.28s 2025-05-25 04:02:48.418938 | orchestrator | osism.services.cephclient : Include container tasks --------------------- 0.21s 2025-05-25 04:02:48.418945 | orchestrator | osism.services.cephclient : Include package tasks ----------------------- 0.14s 2025-05-25 04:02:48.418953 | orchestrator | 2025-05-25 04:02:48.418960 | orchestrator | 2025-05-25 04:02:48.418967 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-05-25 04:02:48.418975 | orchestrator | 2025-05-25 04:02:48.418982 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-05-25 04:02:48.418989 | orchestrator | Sunday 25 May 2025 04:02:32 +0000 (0:00:00.174) 0:00:00.174 ************ 2025-05-25 04:02:48.418996 | orchestrator | ok: [testbed-node-0] 2025-05-25 04:02:48.419004 | orchestrator | ok: [testbed-node-1] 2025-05-25 04:02:48.419011 | orchestrator | ok: [testbed-node-2] 2025-05-25 04:02:48.419018 | orchestrator | 2025-05-25 04:02:48.419025 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-05-25 04:02:48.419033 | orchestrator | Sunday 25 May 2025 04:02:32 +0000 (0:00:00.250) 0:00:00.425 ************ 2025-05-25 04:02:48.419040 | orchestrator | ok: [testbed-node-0] => (item=enable_keystone_True) 2025-05-25 04:02:48.419049 | orchestrator | ok: [testbed-node-1] => (item=enable_keystone_True) 2025-05-25 04:02:48.419058 | orchestrator | ok: [testbed-node-2] => (item=enable_keystone_True) 2025-05-25 04:02:48.419066 | orchestrator | 2025-05-25 04:02:48.419075 | orchestrator | PLAY [Wait for the Keystone service] 
******************************************* 2025-05-25 04:02:48.419083 | orchestrator | 2025-05-25 04:02:48.419091 | orchestrator | TASK [Waiting for Keystone public port to be UP] ******************************* 2025-05-25 04:02:48.419099 | orchestrator | Sunday 25 May 2025 04:02:33 +0000 (0:00:00.545) 0:00:00.970 ************ 2025-05-25 04:02:48.419172 | orchestrator | ok: [testbed-node-1] 2025-05-25 04:02:48.419182 | orchestrator | ok: [testbed-node-0] 2025-05-25 04:02:48.419191 | orchestrator | ok: [testbed-node-2] 2025-05-25 04:02:48.419199 | orchestrator | 2025-05-25 04:02:48.419267 | orchestrator | PLAY RECAP ********************************************************************* 2025-05-25 04:02:48.419278 | orchestrator | testbed-node-0 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-05-25 04:02:48.419287 | orchestrator | testbed-node-1 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-05-25 04:02:48.419296 | orchestrator | testbed-node-2 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-05-25 04:02:48.419305 | orchestrator | 2025-05-25 04:02:48.419313 | orchestrator | 2025-05-25 04:02:48.419322 | orchestrator | TASKS RECAP ******************************************************************** 2025-05-25 04:02:48.419331 | orchestrator | Sunday 25 May 2025 04:02:33 +0000 (0:00:00.626) 0:00:01.596 ************ 2025-05-25 04:02:48.419339 | orchestrator | =============================================================================== 2025-05-25 04:02:48.419348 | orchestrator | Waiting for Keystone public port to be UP ------------------------------- 0.63s 2025-05-25 04:02:48.419356 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.55s 2025-05-25 04:02:48.419365 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.25s 2025-05-25 04:02:48.419373 | orchestrator | 2025-05-25 04:02:48.419381 | orchestrator | 
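
[editor's note] The "Waiting for Keystone public port to be UP" task above is a plain TCP reachability probe against port 5000 on each control node (the log's healthcheck entries use 192.168.16.10-12). A minimal standalone sketch of such a check, for readers reproducing the wait by hand; the function name and 5-second timeout are assumptions, not the kolla-ansible source:

```python
import socket

def port_is_up(host: str, port: int, timeout: float = 5.0) -> bool:
    """Return True if a TCP connection to host:port succeeds within timeout."""
    try:
        # create_connection resolves the host and attempts a TCP handshake;
        # any failure (refused, unreachable, timed out) raises OSError.
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# e.g. port_is_up("192.168.16.10", 5000) on the testbed network mirrors
# the per-node Keystone public-port wait shown in the play above.
```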
2025-05-25 04:02:48.419389 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-05-25 04:02:48.419396 | orchestrator | 2025-05-25 04:02:48.419403 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-05-25 04:02:48.419410 | orchestrator | Sunday 25 May 2025 04:00:06 +0000 (0:00:00.212) 0:00:00.212 ************ 2025-05-25 04:02:48.419417 | orchestrator | ok: [testbed-node-0] 2025-05-25 04:02:48.419425 | orchestrator | ok: [testbed-node-1] 2025-05-25 04:02:48.419432 | orchestrator | ok: [testbed-node-2] 2025-05-25 04:02:48.419446 | orchestrator | 2025-05-25 04:02:48.419453 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-05-25 04:02:48.419460 | orchestrator | Sunday 25 May 2025 04:00:07 +0000 (0:00:00.246) 0:00:00.459 ************ 2025-05-25 04:02:48.419467 | orchestrator | ok: [testbed-node-0] => (item=enable_keystone_True) 2025-05-25 04:02:48.419475 | orchestrator | ok: [testbed-node-1] => (item=enable_keystone_True) 2025-05-25 04:02:48.419482 | orchestrator | ok: [testbed-node-2] => (item=enable_keystone_True) 2025-05-25 04:02:48.419489 | orchestrator | 2025-05-25 04:02:48.419497 | orchestrator | PLAY [Apply role keystone] ***************************************************** 2025-05-25 04:02:48.419504 | orchestrator | 2025-05-25 04:02:48.419622 | orchestrator | TASK [keystone : include_tasks] ************************************************ 2025-05-25 04:02:48.419633 | orchestrator | Sunday 25 May 2025 04:00:07 +0000 (0:00:00.335) 0:00:00.795 ************ 2025-05-25 04:02:48.419641 | orchestrator | included: /ansible/roles/keystone/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-25 04:02:48.419648 | orchestrator | 2025-05-25 04:02:48.419655 | orchestrator | TASK [keystone : Ensuring config directories exist] **************************** 2025-05-25 04:02:48.419663 | orchestrator | 
Sunday 25 May 2025 04:00:07 +0000 (0:00:00.450) 0:00:01.246 ************ 2025-05-25 04:02:48.419674 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-05-25 04:02:48.419691 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 
'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-05-25 04:02:48.419700 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-05-25 04:02:48.419717 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2025-05-25 04:02:48.419750 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': 
{'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2025-05-25 04:02:48.419793 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2025-05-25 04:02:48.419809 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-05-25 04:02:48.419827 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 
'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-05-25 04:02:48.419841 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-05-25 04:02:48.419856 | orchestrator | 2025-05-25 04:02:48.419864 | orchestrator | TASK [keystone : Check if policies shall be overwritten] *********************** 2025-05-25 04:02:48.419872 | orchestrator | Sunday 25 May 2025 04:00:09 +0000 (0:00:01.560) 0:00:02.807 ************ 2025-05-25 04:02:48.419879 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=/opt/configuration/environments/kolla/files/overlays/keystone/policy.yaml) 2025-05-25 04:02:48.419887 | orchestrator | 2025-05-25 04:02:48.419894 | orchestrator | TASK [keystone : Set keystone policy file] ************************************* 2025-05-25 04:02:48.419901 | orchestrator | Sunday 25 May 2025 04:00:10 +0000 (0:00:00.746) 0:00:03.553 ************ 2025-05-25 04:02:48.419909 | orchestrator | ok: [testbed-node-0] 2025-05-25 04:02:48.419916 | orchestrator | ok: [testbed-node-1] 2025-05-25 04:02:48.419923 | orchestrator | 
ok: [testbed-node-2] 2025-05-25 04:02:48.419930 | orchestrator | 2025-05-25 04:02:48.419938 | orchestrator | TASK [keystone : Check if Keystone domain-specific config is supplied] ********* 2025-05-25 04:02:48.419945 | orchestrator | Sunday 25 May 2025 04:00:10 +0000 (0:00:00.479) 0:00:04.033 ************ 2025-05-25 04:02:48.419952 | orchestrator | ok: [testbed-node-0 -> localhost] 2025-05-25 04:02:48.419959 | orchestrator | 2025-05-25 04:02:48.419967 | orchestrator | TASK [keystone : include_tasks] ************************************************ 2025-05-25 04:02:48.419974 | orchestrator | Sunday 25 May 2025 04:00:11 +0000 (0:00:00.641) 0:00:04.675 ************ 2025-05-25 04:02:48.419981 | orchestrator | included: /ansible/roles/keystone/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-25 04:02:48.419989 | orchestrator | 2025-05-25 04:02:48.420019 | orchestrator | TASK [service-cert-copy : keystone | Copying over extra CA certificates] ******* 2025-05-25 04:02:48.420027 | orchestrator | Sunday 25 May 2025 04:00:11 +0000 (0:00:00.485) 0:00:05.160 ************ 2025-05-25 04:02:48.420036 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 
'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-05-25 04:02:48.420044 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-05-25 04:02:48.420057 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': 
'5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-05-25 04:02:48.420070 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2025-05-25 04:02:48.420083 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2025-05-25 04:02:48.420092 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2025-05-25 04:02:48.420099 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-05-25 04:02:48.420107 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-05-25 04:02:48.420127 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-05-25 04:02:48.420135 | orchestrator | 2025-05-25 04:02:48.420143 | orchestrator | TASK [service-cert-copy : keystone | Copying over backend internal TLS certificate] *** 2025-05-25 04:02:48.420150 | orchestrator | Sunday 25 May 2025 04:00:15 +0000 (0:00:03.279) 0:00:08.439 ************ 2025-05-25 04:02:48.420159 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2025-05-25 04:02:48.420175 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': 
{}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-05-25 04:02:48.420183 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2025-05-25 04:02:48.420191 | orchestrator | skipping: [testbed-node-0] 2025-05-25 04:02:48.420199 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2025-05-25 04:02:48.420215 | orchestrator | skipping: [testbed-node-1] => 
(item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-05-25 04:02:48.420223 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2025-05-25 04:02:48.420231 | orchestrator | skipping: [testbed-node-1] 2025-05-25 04:02:48.420245 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': 
False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2025-05-25 04:02:48.420255 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-05-25 04:02:48.420264 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2025-05-25 04:02:48.420272 | orchestrator | skipping: [testbed-node-2] 2025-05-25 04:02:48.420280 | orchestrator | 2025-05-25 04:02:48.420288 | orchestrator | TASK [service-cert-copy : keystone | Copying over backend internal TLS key] **** 2025-05-25 04:02:48.420303 | orchestrator | Sunday 25 May 2025 04:00:15 +0000 (0:00:00.567) 0:00:09.007 ************ 2025-05-25 
04:02:48.420315 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2025-05-25 04:02:48.420325 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-05-25 04:02:48.420334 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': 
['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2025-05-25 04:02:48.420342 | orchestrator | skipping: [testbed-node-0] 2025-05-25 04:02:48.420357 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone',
2025-05-25 04:02:48 | INFO  | Wait 1 second(s) until the next check
2025-05-25 04:02:48.420367 | orchestrator | 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2025-05-25 04:02:48.420377 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro',
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-05-25 04:02:48.420391 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2025-05-25 04:02:48.420400 | orchestrator | skipping: [testbed-node-1] 2025-05-25 04:02:48.420412 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': 
'5000', 'backend_http_extra': ['balance roundrobin']}}}})  2025-05-25 04:02:48.420422 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-05-25 04:02:48.420437 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2025-05-25 04:02:48.420446 | orchestrator | skipping: [testbed-node-2] 2025-05-25 04:02:48.420455 | orchestrator | 2025-05-25 04:02:48.420463 | orchestrator | TASK [keystone : Copying over config.json files for services] ****************** 2025-05-25 04:02:48.420472 | orchestrator | Sunday 25 May 2025 04:00:16 +0000 (0:00:00.719) 0:00:09.726 ************ 2025-05-25 04:02:48.420481 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-05-25 04:02:48.420498 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-05-25 04:02:48.420508 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 
'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-05-25 04:02:48.420522 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2025-05-25 04:02:48.420531 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2025-05-25 04:02:48.420540 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2025-05-25 04:02:48.420553 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-05-25 04:02:48.420564 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-05-25 04:02:48.420572 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-05-25 04:02:48.420579 | orchestrator | 2025-05-25 04:02:48.420587 | orchestrator | TASK [keystone : Copying over keystone.conf] *********************************** 2025-05-25 04:02:48.420594 | orchestrator | Sunday 25 May 2025 04:00:19 +0000 (0:00:03.441) 0:00:13.167 ************ 2025-05-25 04:02:48.420606 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 
'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-05-25 04:02:48.420615 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-05-25 04:02:48.420627 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-05-25 04:02:48.420638 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 
'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-05-25 04:02:48.420646 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-05-25 04:02:48.420658 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-05-25 04:02:48.420667 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-05-25 04:02:48.420678 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-05-25 04:02:48.420686 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-05-25 04:02:48.420693 | orchestrator | 2025-05-25 04:02:48.420701 | orchestrator | TASK [keystone : Copying keystone-startup script for keystone] ***************** 2025-05-25 04:02:48.420708 | orchestrator | Sunday 25 May 2025 04:00:24 +0000 (0:00:05.009) 0:00:18.177 ************ 2025-05-25 04:02:48.420715 | orchestrator | changed: [testbed-node-0] 2025-05-25 04:02:48.420723 | orchestrator | changed: [testbed-node-1] 2025-05-25 04:02:48.420730 | orchestrator | changed: [testbed-node-2] 2025-05-25 04:02:48.420738 | orchestrator | 2025-05-25 04:02:48.420745 | orchestrator | TASK [keystone : Create Keystone domain-specific config directory] ************* 2025-05-25 04:02:48.420752 | orchestrator | Sunday 25 May 2025 04:00:26 +0000 (0:00:01.370) 0:00:19.547 ************ 2025-05-25 04:02:48.420803 | orchestrator | skipping: [testbed-node-0] 2025-05-25 04:02:48.420813 | orchestrator | skipping: [testbed-node-1] 2025-05-25 04:02:48.420820 | orchestrator | skipping: [testbed-node-2] 2025-05-25 04:02:48.420828 | orchestrator | 2025-05-25 04:02:48.420835 | orchestrator | TASK [keystone : Get file list in custom domains folder] *********************** 2025-05-25 04:02:48.420842 | orchestrator | Sunday 25 May 2025 04:00:26 +0000 (0:00:00.519) 0:00:20.067 ************ 2025-05-25 04:02:48.420850 | orchestrator | skipping: [testbed-node-0] 2025-05-25 04:02:48.420857 | orchestrator | skipping: [testbed-node-1] 2025-05-25 04:02:48.420864 | orchestrator | skipping: [testbed-node-2] 2025-05-25 04:02:48.420871 | orchestrator | 2025-05-25 04:02:48.420878 | orchestrator | TASK [keystone : Copying Keystone Domain specific settings] ******************** 2025-05-25 04:02:48.420886 | orchestrator | Sunday 25 May 2025 04:00:27 +0000 (0:00:00.512) 0:00:20.579 ************ 2025-05-25 04:02:48.420893 | orchestrator | skipping: [testbed-node-0] 2025-05-25 04:02:48.420900 | orchestrator | skipping: [testbed-node-1] 
2025-05-25 04:02:48.420907 | orchestrator | skipping: [testbed-node-2] 2025-05-25 04:02:48.420915 | orchestrator | 2025-05-25 04:02:48.420922 | orchestrator | TASK [keystone : Copying over existing policy file] **************************** 2025-05-25 04:02:48.420929 | orchestrator | Sunday 25 May 2025 04:00:27 +0000 (0:00:00.313) 0:00:20.893 ************ 2025-05-25 04:02:48.420942 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-05-25 04:02:48.420958 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-05-25 04:02:48.420967 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-05-25 04:02:48.420975 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-05-25 04:02:48.420986 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': 
['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-05-25 04:02:48.420994 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-05-25 04:02:48.421011 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-05-25 04:02:48.421019 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-05-25 04:02:48.421027 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-05-25 04:02:48.421034 | orchestrator | 2025-05-25 04:02:48.421042 | orchestrator | TASK [keystone : include_tasks] ************************************************ 2025-05-25 04:02:48.421049 | orchestrator | Sunday 25 May 2025 04:00:29 +0000 (0:00:02.221) 0:00:23.114 ************ 2025-05-25 04:02:48.421056 | orchestrator | skipping: [testbed-node-0] 2025-05-25 04:02:48.421064 | orchestrator | skipping: [testbed-node-1] 2025-05-25 04:02:48.421071 | orchestrator | skipping: [testbed-node-2] 2025-05-25 04:02:48.421078 | orchestrator | 2025-05-25 04:02:48.421085 | orchestrator | TASK [keystone : 
Copying over wsgi-keystone.conf] ******************************
2025-05-25 04:02:48.421092 | orchestrator | Sunday 25 May 2025 04:00:30 +0000 (0:00:00.279) 0:00:23.394 ************
2025-05-25 04:02:48.421100 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/keystone/templates/wsgi-keystone.conf.j2)
2025-05-25 04:02:48.421107 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/keystone/templates/wsgi-keystone.conf.j2)
2025-05-25 04:02:48.421118 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/keystone/templates/wsgi-keystone.conf.j2)
2025-05-25 04:02:48.421125 | orchestrator |
2025-05-25 04:02:48.421133 | orchestrator | TASK [keystone : Checking whether keystone-paste.ini file exists] **************
2025-05-25 04:02:48.421140 | orchestrator | Sunday 25 May 2025 04:00:32 +0000 (0:00:02.026) 0:00:25.421 ************
2025-05-25 04:02:48.421147 | orchestrator | ok: [testbed-node-0 -> localhost]
2025-05-25 04:02:48.421154 | orchestrator |
2025-05-25 04:02:48.421162 | orchestrator | TASK [keystone : Copying over keystone-paste.ini] ******************************
2025-05-25 04:02:48.421169 | orchestrator | Sunday 25 May 2025 04:00:32 +0000 (0:00:00.883) 0:00:26.304 ************
2025-05-25 04:02:48.421176 | orchestrator | skipping: [testbed-node-0]
2025-05-25 04:02:48.421183 | orchestrator | skipping: [testbed-node-1]
2025-05-25 04:02:48.421190 | orchestrator | skipping: [testbed-node-2]
2025-05-25 04:02:48.421202 | orchestrator |
2025-05-25 04:02:48.421209 | orchestrator | TASK [keystone : Generate the required cron jobs for the node] *****************
2025-05-25 04:02:48.421217 | orchestrator | Sunday 25 May 2025 04:00:33 +0000 (0:00:00.518) 0:00:26.823 ************
2025-05-25 04:02:48.421224 | orchestrator | ok: [testbed-node-0 -> localhost]
2025-05-25 04:02:48.421231 | orchestrator | ok: [testbed-node-1 -> localhost]
2025-05-25 04:02:48.421238 | orchestrator | ok: [testbed-node-2 -> localhost]
2025-05-25 04:02:48.421246 | orchestrator |
2025-05-25 04:02:48.421253 | orchestrator | TASK [keystone : Set fact with the generated cron jobs for building the crontab later] ***
2025-05-25 04:02:48.421260 | orchestrator | Sunday 25 May 2025 04:00:34 +0000 (0:00:00.977) 0:00:27.801 ************
2025-05-25 04:02:48.421267 | orchestrator | ok: [testbed-node-0]
2025-05-25 04:02:48.421275 | orchestrator | ok: [testbed-node-1]
2025-05-25 04:02:48.421282 | orchestrator | ok: [testbed-node-2]
2025-05-25 04:02:48.421289 | orchestrator |
2025-05-25 04:02:48.421296 | orchestrator | TASK [keystone : Copying files for keystone-fernet] ****************************
2025-05-25 04:02:48.421304 | orchestrator | Sunday 25 May 2025 04:00:34 +0000 (0:00:00.282) 0:00:28.083 ************
2025-05-25 04:02:48.421311 | orchestrator | changed: [testbed-node-0] => (item={'src': 'crontab.j2', 'dest': 'crontab'})
2025-05-25 04:02:48.421318 | orchestrator | changed: [testbed-node-1] => (item={'src': 'crontab.j2', 'dest': 'crontab'})
2025-05-25 04:02:48.421325 | orchestrator | changed: [testbed-node-2] => (item={'src': 'crontab.j2', 'dest': 'crontab'})
2025-05-25 04:02:48.421332 | orchestrator | changed: [testbed-node-0] => (item={'src': 'fernet-rotate.sh.j2', 'dest': 'fernet-rotate.sh'})
2025-05-25 04:02:48.421344 | orchestrator | changed: [testbed-node-2] => (item={'src': 'fernet-rotate.sh.j2', 'dest': 'fernet-rotate.sh'})
2025-05-25 04:02:48.421351 | orchestrator | changed: [testbed-node-1] => (item={'src': 'fernet-rotate.sh.j2', 'dest': 'fernet-rotate.sh'})
2025-05-25 04:02:48.421359 | orchestrator | changed: [testbed-node-2] => (item={'src': 'fernet-node-sync.sh.j2', 'dest': 'fernet-node-sync.sh'})
2025-05-25 04:02:48.421366 | orchestrator | changed: [testbed-node-0] => (item={'src': 'fernet-node-sync.sh.j2', 'dest': 'fernet-node-sync.sh'})
2025-05-25 04:02:48.421373 | orchestrator | changed: [testbed-node-1] => (item={'src': 'fernet-node-sync.sh.j2', 'dest': 'fernet-node-sync.sh'})
2025-05-25 04:02:48.421380 | orchestrator | changed: [testbed-node-2] => (item={'src': 'fernet-push.sh.j2', 'dest': 'fernet-push.sh'})
2025-05-25 04:02:48.421387 | orchestrator | changed: [testbed-node-0] => (item={'src': 'fernet-push.sh.j2', 'dest': 'fernet-push.sh'})
2025-05-25 04:02:48.421395 | orchestrator | changed: [testbed-node-1] => (item={'src': 'fernet-push.sh.j2', 'dest': 'fernet-push.sh'})
2025-05-25 04:02:48.421402 | orchestrator | changed: [testbed-node-2] => (item={'src': 'fernet-healthcheck.sh.j2', 'dest': 'fernet-healthcheck.sh'})
2025-05-25 04:02:48.421409 | orchestrator | changed: [testbed-node-0] => (item={'src': 'fernet-healthcheck.sh.j2', 'dest': 'fernet-healthcheck.sh'})
2025-05-25 04:02:48.421416 | orchestrator | changed: [testbed-node-1] => (item={'src': 'fernet-healthcheck.sh.j2', 'dest': 'fernet-healthcheck.sh'})
2025-05-25 04:02:48.421423 | orchestrator | changed: [testbed-node-2] => (item={'src': 'id_rsa', 'dest': 'id_rsa'})
2025-05-25 04:02:48.421430 | orchestrator | changed: [testbed-node-0] => (item={'src': 'id_rsa', 'dest': 'id_rsa'})
2025-05-25 04:02:48.421438 | orchestrator | changed: [testbed-node-1] => (item={'src': 'id_rsa', 'dest': 'id_rsa'})
2025-05-25 04:02:48.421445 | orchestrator | changed: [testbed-node-2] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'})
2025-05-25 04:02:48.421452 | orchestrator | changed: [testbed-node-1] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'})
2025-05-25 04:02:48.421459 | orchestrator | changed: [testbed-node-0] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'})
2025-05-25 04:02:48.421467 | orchestrator |
2025-05-25 04:02:48.421474 | orchestrator | TASK [keystone : Copying files for keystone-ssh] *******************************
2025-05-25 04:02:48.421481 | orchestrator | Sunday 25 May 2025 04:00:43 +0000 (0:00:08.405) 0:00:36.489 ************
2025-05-25 04:02:48.421494 | orchestrator | changed: [testbed-node-0] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'})
2025-05-25 04:02:48.421501 | orchestrator | changed: [testbed-node-1] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'})
2025-05-25 04:02:48.421508 | orchestrator | changed: [testbed-node-2] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'})
2025-05-25 04:02:48.421515 | orchestrator | changed: [testbed-node-0] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'})
2025-05-25 04:02:48.421523 | orchestrator | changed: [testbed-node-1] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'})
2025-05-25 04:02:48.421533 | orchestrator | changed: [testbed-node-2] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'})
2025-05-25 04:02:48.421541 | orchestrator |
2025-05-25 04:02:48.421548 | orchestrator | TASK [keystone : Check keystone containers] ************************************
2025-05-25 04:02:48.421555 | orchestrator | Sunday 25 May 2025 04:00:45 +0000 (0:00:02.440) 0:00:38.929 ************
2025-05-25 04:02:48.421563 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})
2025-05-25 04:02:48.421577 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})
2025-05-25 04:02:48.421585 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})
2025-05-25 04:02:48.421599 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})
2025-05-25 04:02:48.421610 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})
2025-05-25 04:02:48.421618 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})
2025-05-25 04:02:48.421625 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2025-05-25 04:02:48.421637 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2025-05-25 04:02:48.421645 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2025-05-25 04:02:48.421652 | orchestrator |
2025-05-25 04:02:48.421659 | orchestrator | TASK [keystone :
include_tasks] ************************************************
2025-05-25 04:02:48.421667 | orchestrator | Sunday 25 May 2025 04:00:47 +0000 (0:00:02.179) 0:00:41.109 ************
2025-05-25 04:02:48.421679 | orchestrator | skipping: [testbed-node-0]
2025-05-25 04:02:48.421686 | orchestrator | skipping: [testbed-node-1]
2025-05-25 04:02:48.421693 | orchestrator | skipping: [testbed-node-2]
2025-05-25 04:02:48.421701 | orchestrator |
2025-05-25 04:02:48.421708 | orchestrator | TASK [keystone : Creating keystone database] ***********************************
2025-05-25 04:02:48.421715 | orchestrator | Sunday 25 May 2025 04:00:48 +0000 (0:00:00.311) 0:00:41.420 ************
2025-05-25 04:02:48.421722 | orchestrator | changed: [testbed-node-0]
2025-05-25 04:02:48.421729 | orchestrator |
2025-05-25 04:02:48.421736 | orchestrator | TASK [keystone : Creating Keystone database user and setting permissions] ******
2025-05-25 04:02:48.421744 | orchestrator | Sunday 25 May 2025 04:00:50 +0000 (0:00:02.142) 0:00:43.563 ************
2025-05-25 04:02:48.421751 | orchestrator | changed: [testbed-node-0]
2025-05-25 04:02:48.421758 | orchestrator |
2025-05-25 04:02:48.421777 | orchestrator | TASK [keystone : Checking for any running keystone_fernet containers] **********
2025-05-25 04:02:48.421785 | orchestrator | Sunday 25 May 2025 04:00:52 +0000 (0:00:02.429) 0:00:45.993 ************
2025-05-25 04:02:48.421792 | orchestrator | ok: [testbed-node-1]
2025-05-25 04:02:48.421799 | orchestrator | ok: [testbed-node-2]
2025-05-25 04:02:48.421807 | orchestrator | ok: [testbed-node-0]
2025-05-25 04:02:48.421814 | orchestrator |
2025-05-25 04:02:48.421821 | orchestrator | TASK [keystone : Group nodes where keystone_fernet is running] *****************
2025-05-25 04:02:48.421828 | orchestrator | Sunday 25 May 2025 04:00:53 +0000 (0:00:00.934) 0:00:46.927 ************
2025-05-25 04:02:48.421835 | orchestrator | ok: [testbed-node-0]
2025-05-25 04:02:48.421843 | orchestrator | ok: [testbed-node-1]
2025-05-25 04:02:48.421850 | orchestrator | ok: [testbed-node-2]
2025-05-25 04:02:48.421857 | orchestrator |
2025-05-25 04:02:48.421864 | orchestrator | TASK [keystone : Fail if any hosts need bootstrapping and not all hosts targeted] ***
2025-05-25 04:02:48.421875 | orchestrator | Sunday 25 May 2025 04:00:53 +0000 (0:00:00.349) 0:00:47.277 ************
2025-05-25 04:02:48.421883 | orchestrator | skipping: [testbed-node-0]
2025-05-25 04:02:48.421890 | orchestrator | skipping: [testbed-node-1]
2025-05-25 04:02:48.421898 | orchestrator | skipping: [testbed-node-2]
2025-05-25 04:02:48.421905 | orchestrator |
2025-05-25 04:02:48.421912 | orchestrator | TASK [keystone : Running Keystone bootstrap container] *************************
2025-05-25 04:02:48.421919 | orchestrator | Sunday 25 May 2025 04:00:54 +0000 (0:00:00.326) 0:00:47.603 ************
2025-05-25 04:02:48.421926 | orchestrator | changed: [testbed-node-0]
2025-05-25 04:02:48.421934 | orchestrator |
2025-05-25 04:02:48.421941 | orchestrator | TASK [keystone : Running Keystone fernet bootstrap container] ******************
2025-05-25 04:02:48.421948 | orchestrator | Sunday 25 May 2025 04:01:07 +0000 (0:00:13.182) 0:01:00.786 ************
2025-05-25 04:02:48.421955 | orchestrator | changed: [testbed-node-0]
2025-05-25 04:02:48.421963 | orchestrator |
2025-05-25 04:02:48.421970 | orchestrator | TASK [keystone : Flush handlers] ***********************************************
2025-05-25 04:02:48.421977 | orchestrator | Sunday 25 May 2025 04:01:16 +0000 (0:00:09.244) 0:01:10.030 ************
2025-05-25 04:02:48.421984 | orchestrator |
2025-05-25 04:02:48.421991 | orchestrator | TASK [keystone : Flush handlers] ***********************************************
2025-05-25 04:02:48.421998 | orchestrator | Sunday 25 May 2025 04:01:16 +0000 (0:00:00.231) 0:01:10.262 ************
2025-05-25 04:02:48.422006 | orchestrator |
2025-05-25 04:02:48.422013 | orchestrator | TASK [keystone : Flush handlers] ***********************************************
2025-05-25 04:02:48.422047 | orchestrator | Sunday 25 May 2025 04:01:16 +0000 (0:00:00.063) 0:01:10.326 ************
2025-05-25 04:02:48.422055 | orchestrator |
2025-05-25 04:02:48.422062 | orchestrator | RUNNING HANDLER [keystone : Restart keystone-ssh container] ********************
2025-05-25 04:02:48.422070 | orchestrator | Sunday 25 May 2025 04:01:17 +0000 (0:00:00.081) 0:01:10.408 ************
2025-05-25 04:02:48.422077 | orchestrator | changed: [testbed-node-0]
2025-05-25 04:02:48.422084 | orchestrator | changed: [testbed-node-2]
2025-05-25 04:02:48.422091 | orchestrator | changed: [testbed-node-1]
2025-05-25 04:02:48.422099 | orchestrator |
2025-05-25 04:02:48.422106 | orchestrator | RUNNING HANDLER [keystone : Restart keystone-fernet container] *****************
2025-05-25 04:02:48.422119 | orchestrator | Sunday 25 May 2025 04:01:42 +0000 (0:00:25.364) 0:01:35.772 ************
2025-05-25 04:02:48.422127 | orchestrator | changed: [testbed-node-0]
2025-05-25 04:02:48.422134 | orchestrator | changed: [testbed-node-1]
2025-05-25 04:02:48.422141 | orchestrator | changed: [testbed-node-2]
2025-05-25 04:02:48.422148 | orchestrator |
2025-05-25 04:02:48.422156 | orchestrator | RUNNING HANDLER [keystone : Restart keystone container] ************************
2025-05-25 04:02:48.422167 | orchestrator | Sunday 25 May 2025 04:01:52 +0000 (0:00:09.880) 0:01:45.652 ************
2025-05-25 04:02:48.422175 | orchestrator | changed: [testbed-node-0]
2025-05-25 04:02:48.422182 | orchestrator | changed: [testbed-node-2]
2025-05-25 04:02:48.422189 | orchestrator | changed: [testbed-node-1]
2025-05-25 04:02:48.422197 | orchestrator |
2025-05-25 04:02:48.422204 | orchestrator | TASK [keystone : include_tasks] ************************************************
2025-05-25 04:02:48.422211 | orchestrator | Sunday 25 May 2025 04:01:58 +0000 (0:00:06.497) 0:01:52.150 ************
2025-05-25 04:02:48.422219 | orchestrator | included: /ansible/roles/keystone/tasks/distribute_fernet.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-05-25 04:02:48.422226 | orchestrator |
2025-05-25 04:02:48.422233 | orchestrator | TASK [keystone : Waiting for Keystone SSH port to be UP] ***********************
2025-05-25 04:02:48.422240 | orchestrator | Sunday 25 May 2025 04:01:59 +0000 (0:00:00.752) 0:01:52.903 ************
2025-05-25 04:02:48.422248 | orchestrator | ok: [testbed-node-2]
2025-05-25 04:02:48.422255 | orchestrator | ok: [testbed-node-0]
2025-05-25 04:02:48.422262 | orchestrator | ok: [testbed-node-1]
2025-05-25 04:02:48.422269 | orchestrator |
2025-05-25 04:02:48.422277 | orchestrator | TASK [keystone : Run key distribution] *****************************************
2025-05-25 04:02:48.422284 | orchestrator | Sunday 25 May 2025 04:02:00 +0000 (0:00:00.677) 0:01:53.580 ************
2025-05-25 04:02:48.422291 | orchestrator | changed: [testbed-node-0]
2025-05-25 04:02:48.422298 | orchestrator |
2025-05-25 04:02:48.422305 | orchestrator | TASK [keystone : Creating admin project, user, role, service, and endpoint] ****
2025-05-25 04:02:48.422313 | orchestrator | Sunday 25 May 2025 04:02:01 +0000 (0:00:01.762) 0:01:55.342 ************
2025-05-25 04:02:48.422320 | orchestrator | changed: [testbed-node-0] => (item=RegionOne)
2025-05-25 04:02:48.422327 | orchestrator |
2025-05-25 04:02:48.422335 | orchestrator | TASK [service-ks-register : keystone | Creating services] **********************
2025-05-25 04:02:48.422342 | orchestrator | Sunday 25 May 2025 04:02:11 +0000 (0:00:09.194) 0:02:04.537 ************
2025-05-25 04:02:48.422349 | orchestrator | changed: [testbed-node-0] => (item=keystone (identity))
2025-05-25 04:02:48.422356 | orchestrator |
2025-05-25 04:02:48.422363 | orchestrator | TASK [service-ks-register : keystone | Creating endpoints] *********************
2025-05-25 04:02:48.422371 | orchestrator | Sunday 25 May 2025 04:02:30 +0000 (0:00:19.767) 0:02:24.304 ************
2025-05-25 04:02:48.422378 | orchestrator | ok: [testbed-node-0] => (item=keystone -> https://api-int.testbed.osism.xyz:5000 -> internal)
2025-05-25 04:02:48.422386 | orchestrator | ok: [testbed-node-0] => (item=keystone -> https://api.testbed.osism.xyz:5000 -> public)
2025-05-25 04:02:48.422393 | orchestrator |
2025-05-25 04:02:48.422400 | orchestrator | TASK [service-ks-register : keystone | Creating projects] **********************
2025-05-25 04:02:48.422407 | orchestrator | Sunday 25 May 2025 04:02:43 +0000 (0:00:12.089) 0:02:36.394 ************
2025-05-25 04:02:48.422415 | orchestrator | skipping: [testbed-node-0]
2025-05-25 04:02:48.422422 | orchestrator |
2025-05-25 04:02:48.422429 | orchestrator | TASK [service-ks-register : keystone | Creating users] *************************
2025-05-25 04:02:48.422437 | orchestrator | Sunday 25 May 2025 04:02:43 +0000 (0:00:00.334) 0:02:36.729 ************
2025-05-25 04:02:48.422444 | orchestrator | skipping: [testbed-node-0]
2025-05-25 04:02:48.422451 | orchestrator |
2025-05-25 04:02:48.422458 | orchestrator | TASK [service-ks-register : keystone | Creating roles] *************************
2025-05-25 04:02:48.422465 | orchestrator | Sunday 25 May 2025 04:02:43 +0000 (0:00:00.133) 0:02:36.862 ************
2025-05-25 04:02:48.422472 | orchestrator | skipping: [testbed-node-0]
2025-05-25 04:02:48.422487 | orchestrator |
2025-05-25 04:02:48.422499 | orchestrator | TASK [service-ks-register : keystone | Granting user roles] ********************
2025-05-25 04:02:48.422506 | orchestrator | Sunday 25 May 2025 04:02:43 +0000 (0:00:00.115) 0:02:36.978 ************
2025-05-25 04:02:48.422513 | orchestrator | skipping: [testbed-node-0]
2025-05-25 04:02:48.422521 | orchestrator |
2025-05-25 04:02:48.422528 | orchestrator | TASK [keystone : Creating default user role] ***********************************
2025-05-25 04:02:48.422536 | orchestrator | Sunday 25 May 2025 04:02:43 +0000 (0:00:00.321) 0:02:37.299 ************
2025-05-25 04:02:48.422543 | orchestrator | ok: [testbed-node-0]
2025-05-25 04:02:48.422550 | orchestrator |
2025-05-25 04:02:48.422557 | orchestrator | TASK [keystone : include_tasks] ************************************************
2025-05-25 04:02:48.422565 | orchestrator | Sunday 25 May 2025 04:02:47 +0000 (0:00:03.047) 0:02:40.347 ************
2025-05-25 04:02:48.422572 | orchestrator | skipping: [testbed-node-0]
2025-05-25 04:02:48.422579 | orchestrator | skipping: [testbed-node-1]
2025-05-25 04:02:48.422597 | orchestrator | skipping: [testbed-node-2]
2025-05-25 04:02:48.422605 | orchestrator |
2025-05-25 04:02:48.422612 | orchestrator | PLAY RECAP *********************************************************************
2025-05-25 04:02:48.422619 | orchestrator | testbed-node-0 : ok=36  changed=20  unreachable=0 failed=0 skipped=14  rescued=0 ignored=0
2025-05-25 04:02:48.422628 | orchestrator | testbed-node-1 : ok=24  changed=13  unreachable=0 failed=0 skipped=10  rescued=0 ignored=0
2025-05-25 04:02:48.422635 | orchestrator | testbed-node-2 : ok=24  changed=13  unreachable=0 failed=0 skipped=10  rescued=0 ignored=0
2025-05-25 04:02:48.422651 | orchestrator |
2025-05-25 04:02:48.422658 | orchestrator |
2025-05-25 04:02:48.422666 | orchestrator | TASKS RECAP ********************************************************************
2025-05-25 04:02:48.422673 | orchestrator | Sunday 25 May 2025 04:02:47 +0000 (0:00:00.852) 0:02:41.199 ************
2025-05-25 04:02:48.422681 | orchestrator | ===============================================================================
2025-05-25 04:02:48.422688 | orchestrator | keystone : Restart keystone-ssh container ------------------------------ 25.36s
2025-05-25 04:02:48.422696 | orchestrator | service-ks-register : keystone | Creating services --------------------- 19.77s
2025-05-25 04:02:48.422703 | orchestrator | keystone : Running Keystone bootstrap container ------------------------ 13.18s
2025-05-25 04:02:48.422715 | orchestrator | service-ks-register : keystone | Creating endpoints -------------------- 12.09s
2025-05-25 04:02:48.422723 | orchestrator | keystone : Restart keystone-fernet container ---------------------------- 9.88s
2025-05-25 04:02:48.422730 | orchestrator | keystone : Running Keystone fernet bootstrap container ------------------ 9.24s
2025-05-25 04:02:48.422738 | orchestrator | keystone : Creating admin project, user, role, service, and endpoint ---- 9.19s
2025-05-25 04:02:48.422745 | orchestrator | keystone : Copying files for keystone-fernet ---------------------------- 8.41s
2025-05-25 04:02:48.422752 | orchestrator | keystone : Restart keystone container ----------------------------------- 6.50s
2025-05-25 04:02:48.422760 | orchestrator | keystone : Copying over keystone.conf ----------------------------------- 5.01s
2025-05-25 04:02:48.422803 | orchestrator | keystone : Copying over config.json files for services ------------------ 3.44s
2025-05-25 04:02:48.422811 | orchestrator | service-cert-copy : keystone | Copying over extra CA certificates ------- 3.28s
2025-05-25 04:02:48.422819 | orchestrator | keystone : Creating default user role ----------------------------------- 3.05s
2025-05-25 04:02:48.422826 | orchestrator | keystone : Copying files for keystone-ssh ------------------------------- 2.44s
2025-05-25 04:02:48.422833 | orchestrator | keystone : Creating Keystone database user and setting permissions ------ 2.43s
2025-05-25 04:02:48.422840 | orchestrator | keystone : Copying over existing policy file ---------------------------- 2.22s
2025-05-25 04:02:48.422848 | orchestrator | keystone : Check keystone containers ------------------------------------ 2.18s
2025-05-25 04:02:48.422855 | orchestrator | keystone : Creating keystone database ----------------------------------- 2.14s
2025-05-25 04:02:48.422868 | orchestrator | keystone : Copying over wsgi-keystone.conf ------------------------------ 2.03s
2025-05-25 04:02:48.422875 | orchestrator | keystone : Run key distribution ----------------------------------------- 1.76s
2025-05-25 04:02:51.498426 | orchestrator | 2025-05-25 04:02:51 | INFO  | Task f07d46f4-3360-4bb8-ac88-fd4536361d65 is in state STARTED
2025-05-25 04:02:51.498529 | orchestrator | 2025-05-25 04:02:51 | INFO  | Task d6c24b89-ac59-4b65-9cc6-94f5016cf236 is in state STARTED
2025-05-25 04:02:51.500109 | orchestrator | 2025-05-25 04:02:51 | INFO  | Task b12905c3-76dd-447f-b4d2-764f064e14aa is in state STARTED
2025-05-25 04:02:51.502093 | orchestrator | 2025-05-25 04:02:51 | INFO  | Task 8f07b207-3c6d-4e25-84ee-dee7e479a720 is in state STARTED
2025-05-25 04:02:51.503693 | orchestrator | 2025-05-25 04:02:51 | INFO  | Task 74d32bef-f393-42b8-bfa3-4bb0a1100f52 is in state STARTED
2025-05-25 04:02:51.504386 | orchestrator | 2025-05-25 04:02:51 | INFO  | Wait 1 second(s) until the next check
2025-05-25 04:02:54.557932 | orchestrator | 2025-05-25 04:02:54 | INFO  | Task f07d46f4-3360-4bb8-ac88-fd4536361d65 is in state STARTED
2025-05-25 04:02:54.558258 | orchestrator | 2025-05-25 04:02:54 | INFO  | Task d6c24b89-ac59-4b65-9cc6-94f5016cf236 is in state STARTED
2025-05-25 04:02:54.558920 | orchestrator | 2025-05-25 04:02:54 | INFO  | Task b12905c3-76dd-447f-b4d2-764f064e14aa is in state STARTED
2025-05-25 04:02:54.560570 | orchestrator | 2025-05-25 04:02:54 | INFO  | Task 8f07b207-3c6d-4e25-84ee-dee7e479a720 is in state STARTED
2025-05-25 04:02:54.562341 | orchestrator | 2025-05-25 04:02:54 | INFO  | Task 74d32bef-f393-42b8-bfa3-4bb0a1100f52 is in state STARTED
2025-05-25 04:02:54.562394 | orchestrator | 2025-05-25 04:02:54 | INFO  | Wait 1 second(s) until the next check
2025-05-25 04:02:57.608314 | orchestrator | 2025-05-25 04:02:57 | INFO  | Task f07d46f4-3360-4bb8-ac88-fd4536361d65 is in state STARTED
2025-05-25 04:02:57.612665 | orchestrator | 2025-05-25 04:02:57 | INFO  | Task d6c24b89-ac59-4b65-9cc6-94f5016cf236 is in state STARTED
2025-05-25 04:02:57.613993 | orchestrator | 2025-05-25 04:02:57 | INFO  | Task
b12905c3-76dd-447f-b4d2-764f064e14aa is in state STARTED 2025-05-25 04:02:57.615053 | orchestrator | 2025-05-25 04:02:57 | INFO  | Task 8f07b207-3c6d-4e25-84ee-dee7e479a720 is in state STARTED 2025-05-25 04:02:57.617829 | orchestrator | 2025-05-25 04:02:57 | INFO  | Task 74d32bef-f393-42b8-bfa3-4bb0a1100f52 is in state STARTED 2025-05-25 04:02:57.617904 | orchestrator | 2025-05-25 04:02:57 | INFO  | Wait 1 second(s) until the next check 2025-05-25 04:03:00.652220 | orchestrator | 2025-05-25 04:03:00 | INFO  | Task f07d46f4-3360-4bb8-ac88-fd4536361d65 is in state STARTED 2025-05-25 04:03:00.652348 | orchestrator | 2025-05-25 04:03:00 | INFO  | Task d6c24b89-ac59-4b65-9cc6-94f5016cf236 is in state STARTED 2025-05-25 04:03:00.653849 | orchestrator | 2025-05-25 04:03:00 | INFO  | Task b12905c3-76dd-447f-b4d2-764f064e14aa is in state STARTED 2025-05-25 04:03:00.653896 | orchestrator | 2025-05-25 04:03:00 | INFO  | Task 8f07b207-3c6d-4e25-84ee-dee7e479a720 is in state STARTED 2025-05-25 04:03:00.654547 | orchestrator | 2025-05-25 04:03:00 | INFO  | Task 74d32bef-f393-42b8-bfa3-4bb0a1100f52 is in state STARTED 2025-05-25 04:03:00.654572 | orchestrator | 2025-05-25 04:03:00 | INFO  | Wait 1 second(s) until the next check 2025-05-25 04:03:03.683642 | orchestrator | 2025-05-25 04:03:03 | INFO  | Task f07d46f4-3360-4bb8-ac88-fd4536361d65 is in state STARTED 2025-05-25 04:03:03.685430 | orchestrator | 2025-05-25 04:03:03 | INFO  | Task d6c24b89-ac59-4b65-9cc6-94f5016cf236 is in state STARTED 2025-05-25 04:03:03.685822 | orchestrator | 2025-05-25 04:03:03 | INFO  | Task b12905c3-76dd-447f-b4d2-764f064e14aa is in state STARTED 2025-05-25 04:03:03.689575 | orchestrator | 2025-05-25 04:03:03 | INFO  | Task 8f07b207-3c6d-4e25-84ee-dee7e479a720 is in state STARTED 2025-05-25 04:03:03.693802 | orchestrator | 2025-05-25 04:03:03 | INFO  | Task 74d32bef-f393-42b8-bfa3-4bb0a1100f52 is in state STARTED 2025-05-25 04:03:03.693853 | orchestrator | 2025-05-25 04:03:03 | INFO  | Wait 1 
second(s) until the next check 2025-05-25 04:03:06.712864 | orchestrator | 2025-05-25 04:03:06 | INFO  | Task f07d46f4-3360-4bb8-ac88-fd4536361d65 is in state STARTED 2025-05-25 04:03:06.713567 | orchestrator | 2025-05-25 04:03:06 | INFO  | Task d6c24b89-ac59-4b65-9cc6-94f5016cf236 is in state STARTED 2025-05-25 04:03:06.713615 | orchestrator | 2025-05-25 04:03:06 | INFO  | Task b12905c3-76dd-447f-b4d2-764f064e14aa is in state STARTED 2025-05-25 04:03:06.714814 | orchestrator | 2025-05-25 04:03:06 | INFO  | Task 8f07b207-3c6d-4e25-84ee-dee7e479a720 is in state STARTED 2025-05-25 04:03:06.715573 | orchestrator | 2025-05-25 04:03:06 | INFO  | Task 74d32bef-f393-42b8-bfa3-4bb0a1100f52 is in state STARTED 2025-05-25 04:03:06.715599 | orchestrator | 2025-05-25 04:03:06 | INFO  | Wait 1 second(s) until the next check 2025-05-25 04:03:09.759824 | orchestrator | 2025-05-25 04:03:09 | INFO  | Task f07d46f4-3360-4bb8-ac88-fd4536361d65 is in state STARTED 2025-05-25 04:03:09.763045 | orchestrator | 2025-05-25 04:03:09 | INFO  | Task d6c24b89-ac59-4b65-9cc6-94f5016cf236 is in state STARTED 2025-05-25 04:03:09.766377 | orchestrator | 2025-05-25 04:03:09 | INFO  | Task b12905c3-76dd-447f-b4d2-764f064e14aa is in state STARTED 2025-05-25 04:03:09.767830 | orchestrator | 2025-05-25 04:03:09 | INFO  | Task 8f07b207-3c6d-4e25-84ee-dee7e479a720 is in state STARTED 2025-05-25 04:03:09.768695 | orchestrator | 2025-05-25 04:03:09 | INFO  | Task 74d32bef-f393-42b8-bfa3-4bb0a1100f52 is in state STARTED 2025-05-25 04:03:09.768724 | orchestrator | 2025-05-25 04:03:09 | INFO  | Wait 1 second(s) until the next check 2025-05-25 04:03:12.798522 | orchestrator | 2025-05-25 04:03:12 | INFO  | Task f07d46f4-3360-4bb8-ac88-fd4536361d65 is in state SUCCESS 2025-05-25 04:03:12.798623 | orchestrator | 2025-05-25 04:03:12 | INFO  | Task d6c24b89-ac59-4b65-9cc6-94f5016cf236 is in state STARTED 2025-05-25 04:03:12.798673 | orchestrator | 2025-05-25 04:03:12 | INFO  | Task 
b12905c3-76dd-447f-b4d2-764f064e14aa is in state STARTED 2025-05-25 04:03:12.799233 | orchestrator | 2025-05-25 04:03:12 | INFO  | Task 8f07b207-3c6d-4e25-84ee-dee7e479a720 is in state STARTED 2025-05-25 04:03:12.799468 | orchestrator | 2025-05-25 04:03:12 | INFO  | Task 74d32bef-f393-42b8-bfa3-4bb0a1100f52 is in state STARTED 2025-05-25 04:03:12.800014 | orchestrator | 2025-05-25 04:03:12 | INFO  | Task 6e378feb-528a-412a-95cd-f98cd2c708a8 is in state STARTED 2025-05-25 04:03:12.800039 | orchestrator | 2025-05-25 04:03:12 | INFO  | Wait 1 second(s) until the next check 2025-05-25 04:03:15.830681 | orchestrator | 2025-05-25 04:03:15 | INFO  | Task d6c24b89-ac59-4b65-9cc6-94f5016cf236 is in state STARTED 2025-05-25 04:03:15.832228 | orchestrator | 2025-05-25 04:03:15 | INFO  | Task b12905c3-76dd-447f-b4d2-764f064e14aa is in state STARTED 2025-05-25 04:03:15.833273 | orchestrator | 2025-05-25 04:03:15 | INFO  | Task 8f07b207-3c6d-4e25-84ee-dee7e479a720 is in state STARTED 2025-05-25 04:03:15.835476 | orchestrator | 2025-05-25 04:03:15 | INFO  | Task 74d32bef-f393-42b8-bfa3-4bb0a1100f52 is in state STARTED 2025-05-25 04:03:15.836560 | orchestrator | 2025-05-25 04:03:15 | INFO  | Task 6e378feb-528a-412a-95cd-f98cd2c708a8 is in state STARTED 2025-05-25 04:03:15.836584 | orchestrator | 2025-05-25 04:03:15 | INFO  | Wait 1 second(s) until the next check 2025-05-25 04:03:18.872510 | orchestrator | 2025-05-25 04:03:18 | INFO  | Task d6c24b89-ac59-4b65-9cc6-94f5016cf236 is in state STARTED 2025-05-25 04:03:18.872849 | orchestrator | 2025-05-25 04:03:18 | INFO  | Task b12905c3-76dd-447f-b4d2-764f064e14aa is in state STARTED 2025-05-25 04:03:18.874563 | orchestrator | 2025-05-25 04:03:18 | INFO  | Task 8f07b207-3c6d-4e25-84ee-dee7e479a720 is in state STARTED 2025-05-25 04:03:18.875320 | orchestrator | 2025-05-25 04:03:18 | INFO  | Task 74d32bef-f393-42b8-bfa3-4bb0a1100f52 is in state STARTED 2025-05-25 04:03:18.876598 | orchestrator | 2025-05-25 04:03:18 | INFO  | Task 
6e378feb-528a-412a-95cd-f98cd2c708a8 is in state STARTED
b12905c3-76dd-447f-b4d2-764f064e14aa is in state SUCCESS
2025-05-25 04:03:58.350519 | orchestrator |
2025-05-25 04:03:58.350551 | orchestrator |
2025-05-25 04:03:58.350564 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2025-05-25 04:03:58.350576 | orchestrator |
2025-05-25 04:03:58.350587 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2025-05-25 04:03:58.350599 | orchestrator | Sunday 25 May 2025 04:02:38 +0000 (0:00:00.267) 0:00:00.267 ************
2025-05-25 04:03:58.350610 | orchestrator | ok: [testbed-node-0]
2025-05-25 04:03:58.350622 | orchestrator | ok: [testbed-node-1]
2025-05-25 04:03:58.350633 | orchestrator | ok: [testbed-node-2]
2025-05-25 04:03:58.350669 | orchestrator | ok: [testbed-node-3]
2025-05-25 04:03:58.350680 | orchestrator | ok: [testbed-node-4]
2025-05-25 04:03:58.350691 | orchestrator | ok: [testbed-node-5]
2025-05-25 04:03:58.350702 | orchestrator | ok: [testbed-manager]
2025-05-25 04:03:58.350712 | orchestrator |
2025-05-25 04:03:58.350723 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2025-05-25 04:03:58.350734 | orchestrator | Sunday 25 May 2025 04:02:39 +0000 (0:00:00.771) 0:00:01.038 ************
2025-05-25 04:03:58.350770 | orchestrator | ok: [testbed-node-0] => (item=enable_ceph_rgw_True)
2025-05-25 04:03:58.350782 | orchestrator | ok: [testbed-node-1] => (item=enable_ceph_rgw_True)
2025-05-25 04:03:58.350793 | orchestrator | ok: [testbed-node-2] => (item=enable_ceph_rgw_True)
2025-05-25 04:03:58.350804 | orchestrator | ok: [testbed-node-3] => (item=enable_ceph_rgw_True)
2025-05-25 04:03:58.350814 | orchestrator | ok: [testbed-node-4] => (item=enable_ceph_rgw_True)
2025-05-25 04:03:58.350825 | orchestrator | ok: [testbed-node-5] => (item=enable_ceph_rgw_True)
2025-05-25 04:03:58.350836 | orchestrator | ok: [testbed-manager] => (item=enable_ceph_rgw_True)
2025-05-25 04:03:58.350847 | orchestrator |
2025-05-25 04:03:58.350857 | orchestrator | PLAY [Apply role ceph-rgw] *****************************************************
2025-05-25 04:03:58.350868 | orchestrator |
2025-05-25 04:03:58.350879 | orchestrator | TASK [ceph-rgw : include_tasks] ************************************************
2025-05-25 04:03:58.350889 | orchestrator | Sunday 25 May 2025 04:02:40 +0000 (0:00:00.998) 0:00:02.037 ************
2025-05-25 04:03:58.350901 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5, testbed-manager
2025-05-25 04:03:58.350913 | orchestrator |
2025-05-25 04:03:58.350924 | orchestrator | TASK [service-ks-register : ceph-rgw | Creating services] **********************
2025-05-25 04:03:58.350935 | orchestrator | Sunday 25 May 2025 04:02:42 +0000 (0:00:01.855) 0:00:03.892 ************
2025-05-25 04:03:58.350945 | orchestrator | changed: [testbed-node-0] => (item=swift (object-store))
2025-05-25 04:03:58.350956 | orchestrator |
2025-05-25 04:03:58.350967 | orchestrator | TASK [service-ks-register : ceph-rgw | Creating endpoints] *********************
2025-05-25 04:03:58.350986 | orchestrator | Sunday 25 May 2025 04:02:45 +0000 (0:00:03.386) 0:00:07.279 ************
2025-05-25 04:03:58.351005 | orchestrator | changed: [testbed-node-0] => (item=swift -> https://api-int.testbed.osism.xyz:6780/swift/v1/AUTH_%(project_id)s -> internal)
2025-05-25 04:03:58.351025 | orchestrator | changed: [testbed-node-0] => (item=swift -> https://api.testbed.osism.xyz:6780/swift/v1/AUTH_%(project_id)s -> public)
2025-05-25 04:03:58.351044 | orchestrator |
2025-05-25 04:03:58.351063 | orchestrator | TASK [service-ks-register : ceph-rgw | Creating projects] **********************
2025-05-25 04:03:58.351081 | orchestrator | Sunday 25 May 2025 04:02:52 +0000 (0:00:06.461) 0:00:13.741 ************
2025-05-25 04:03:58.351093 | orchestrator | ok: [testbed-node-0] => (item=service)
2025-05-25 04:03:58.351103 | orchestrator |
2025-05-25 04:03:58.351114 | orchestrator | TASK [service-ks-register : ceph-rgw | Creating users] *************************
2025-05-25 04:03:58.351127 | orchestrator | Sunday 25 May 2025 04:02:55 +0000 (0:00:03.395) 0:00:17.136 ************
2025-05-25 04:03:58.351139 | orchestrator | [WARNING]: Module did not set no_log for update_password
2025-05-25 04:03:58.351152 | orchestrator | changed: [testbed-node-0] => (item=ceph_rgw -> service)
2025-05-25 04:03:58.351164 | orchestrator |
2025-05-25 04:03:58.351323 | orchestrator | TASK [service-ks-register : ceph-rgw | Creating roles] *************************
2025-05-25 04:03:58.351339 | orchestrator | Sunday 25 May 2025 04:02:59 +0000 (0:00:03.816) 0:00:20.953 ************
2025-05-25 04:03:58.351351 | orchestrator | ok: [testbed-node-0] => (item=admin)
2025-05-25 04:03:58.351365 | orchestrator | changed: [testbed-node-0] => (item=ResellerAdmin)
2025-05-25 04:03:58.351384 | orchestrator |
2025-05-25 04:03:58.351403 | orchestrator | TASK [service-ks-register : ceph-rgw | Granting user roles] ********************
2025-05-25 04:03:58.351422 | orchestrator | Sunday 25 May 2025 04:03:05 +0000 (0:00:06.520) 0:00:27.473 ************
2025-05-25 04:03:58.351467 | orchestrator | changed: [testbed-node-0] => (item=ceph_rgw -> service -> admin)
2025-05-25 04:03:58.351489 | orchestrator |
2025-05-25 04:03:58.351508 | orchestrator | PLAY RECAP *********************************************************************
2025-05-25 04:03:58.351528 | orchestrator | testbed-manager : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-05-25 04:03:58.351547 | orchestrator | testbed-node-0 : ok=9  changed=5  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-05-25 04:03:58.351566 | orchestrator | testbed-node-1 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-05-25 04:03:58.351585 | orchestrator | testbed-node-2 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-05-25 04:03:58.351602 | orchestrator | testbed-node-3 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-05-25 04:03:58.351627 | orchestrator | testbed-node-4 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-05-25 04:03:58.351639 | orchestrator | testbed-node-5 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-05-25 04:03:58.351650 | orchestrator |
2025-05-25 04:03:58.351661 | orchestrator |
2025-05-25 04:03:58.351671 | orchestrator | TASKS RECAP ********************************************************************
2025-05-25 04:03:58.351682 | orchestrator | Sunday 25 May 2025 04:03:10 +0000 (0:00:04.560) 0:00:32.034 ************
2025-05-25 04:03:58.351693 | orchestrator | ===============================================================================
2025-05-25 04:03:58.352030 | orchestrator | service-ks-register : ceph-rgw | Creating roles ------------------------- 6.52s
2025-05-25 04:03:58.352068 | orchestrator | service-ks-register : ceph-rgw | Creating endpoints --------------------- 6.46s
2025-05-25 04:03:58.352080 | orchestrator | service-ks-register : ceph-rgw | Granting user roles -------------------- 4.56s
2025-05-25 04:03:58.352090 | orchestrator | service-ks-register : ceph-rgw | Creating users ------------------------- 3.82s
2025-05-25 04:03:58.352101 | orchestrator | service-ks-register : ceph-rgw | Creating projects ---------------------- 3.40s
2025-05-25 04:03:58.352112 | orchestrator | service-ks-register : ceph-rgw | Creating services ---------------------- 3.39s
2025-05-25 04:03:58.352122 | orchestrator | ceph-rgw : include_tasks ------------------------------------------------ 1.86s
2025-05-25 04:03:58.352133 | orchestrator | Group hosts based on enabled services ----------------------------------- 1.00s
2025-05-25 04:03:58.352143 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.77s
2025-05-25 04:03:58.352154 | orchestrator |
2025-05-25 04:03:58.352164 | orchestrator |
2025-05-25 04:03:58.352175 | orchestrator | PLAY [Bootstraph ceph dashboard] ***********************************************
2025-05-25 04:03:58.352186 | orchestrator |
2025-05-25 04:03:58.352196 | orchestrator | TASK [Disable the ceph dashboard] **********************************************
2025-05-25 04:03:58.352207 | orchestrator | Sunday 25 May 2025 04:02:32 +0000 (0:00:00.241) 0:00:00.241 ************
2025-05-25 04:03:58.352218 | orchestrator | changed: [testbed-manager]
2025-05-25 04:03:58.352229 | orchestrator |
2025-05-25 04:03:58.352239 | orchestrator | TASK [Set mgr/dashboard/ssl to false] ******************************************
2025-05-25 04:03:58.352250 | orchestrator | Sunday 25 May 2025 04:02:34 +0000 (0:00:01.957) 0:00:02.199 ************
2025-05-25 04:03:58.352261 | orchestrator | changed: [testbed-manager]
2025-05-25 04:03:58.352271 | orchestrator |
2025-05-25 04:03:58.352282 | orchestrator | TASK [Set mgr/dashboard/server_port to 7000] ***********************************
2025-05-25 04:03:58.352292 | orchestrator | Sunday 25 May 2025 04:02:35 +0000 (0:00:00.978) 0:00:03.177 ************
2025-05-25 04:03:58.352311 | orchestrator | changed: [testbed-manager]
2025-05-25 04:03:58.352331 | orchestrator |
2025-05-25 04:03:58.352350 | orchestrator | TASK [Set mgr/dashboard/server_addr to 0.0.0.0] ********************************
2025-05-25 04:03:58.352368 | orchestrator | Sunday 25 May 2025 04:02:36 +0000 (0:00:01.028) 0:00:04.206 ************
2025-05-25 04:03:58.352409 | orchestrator | changed: [testbed-manager]
2025-05-25 04:03:58.352433 | orchestrator |
2025-05-25 04:03:58.352453 | orchestrator | TASK [Set mgr/dashboard/standby_behaviour to error] ****************************
2025-05-25 04:03:58.352472 | orchestrator | Sunday 25 May 2025 04:02:37 +0000 (0:00:00.870) 0:00:05.076 ************
2025-05-25 04:03:58.352491 | orchestrator | changed: [testbed-manager]
2025-05-25 04:03:58.352503 | orchestrator |
2025-05-25 04:03:58.352514 | orchestrator | TASK [Set mgr/dashboard/standby_error_status_code to 404] **********************
2025-05-25 04:03:58.352524 | orchestrator | Sunday 25 May 2025 04:02:38 +0000 (0:00:00.953) 0:00:06.030 ************
2025-05-25 04:03:58.352552 | orchestrator | changed: [testbed-manager]
2025-05-25 04:03:58.352563 | orchestrator |
2025-05-25 04:03:58.352574 | orchestrator | TASK [Enable the ceph dashboard] ***********************************************
2025-05-25 04:03:58.352584 | orchestrator | Sunday 25 May 2025 04:02:39 +0000 (0:00:00.868) 0:00:06.898 ************
2025-05-25 04:03:58.352595 | orchestrator | changed: [testbed-manager]
2025-05-25 04:03:58.352606 | orchestrator |
2025-05-25 04:03:58.352618 | orchestrator | TASK [Write ceph_dashboard_password to temporary file] *************************
2025-05-25 04:03:58.352630 | orchestrator | Sunday 25 May 2025 04:02:40 +0000 (0:00:01.117) 0:00:08.016 ************
2025-05-25 04:03:58.352642 | orchestrator | changed: [testbed-manager]
2025-05-25 04:03:58.352654 | orchestrator |
2025-05-25 04:03:58.352666 | orchestrator | TASK [Create admin user] *******************************************************
2025-05-25 04:03:58.352680 | orchestrator | Sunday 25 May 2025 04:02:41 +0000 (0:00:01.056) 0:00:09.072 ************
2025-05-25 04:03:58.352693 | orchestrator | changed: [testbed-manager]
2025-05-25 04:03:58.352704 | orchestrator |
2025-05-25 04:03:58.352715 | orchestrator | TASK [Remove temporary file for ceph_dashboard_password] ***********************
2025-05-25 04:03:58.352726 | orchestrator | Sunday 25 May 2025 04:03:32 +0000 (0:00:50.947) 0:01:00.020 ************
2025-05-25 04:03:58.352736 | orchestrator | skipping: [testbed-manager]
2025-05-25 04:03:58.352798 | orchestrator |
2025-05-25 04:03:58.352810 | orchestrator | PLAY [Restart ceph manager services] *******************************************
2025-05-25 04:03:58.352820 | orchestrator |
2025-05-25 04:03:58.352831 | orchestrator | TASK [Restart ceph manager service] ********************************************
2025-05-25 04:03:58.352842 | orchestrator | Sunday 25 May 2025 04:03:32 +0000 (0:00:00.149) 0:01:00.169 ************
2025-05-25 04:03:58.352852 | orchestrator | changed: [testbed-node-0]
2025-05-25 04:03:58.352863 | orchestrator |
2025-05-25 04:03:58.352873 | orchestrator | PLAY [Restart ceph manager services] *******************************************
2025-05-25 04:03:58.352884 | orchestrator |
2025-05-25 04:03:58.352894 | orchestrator | TASK [Restart ceph manager service] ********************************************
2025-05-25 04:03:58.352905 | orchestrator | Sunday 25 May 2025 04:03:34 +0000 (0:00:01.497) 0:01:01.667 ************
2025-05-25 04:03:58.352915 | orchestrator | changed: [testbed-node-1]
2025-05-25 04:03:58.352926 | orchestrator |
2025-05-25 04:03:58.352937 | orchestrator | PLAY [Restart ceph manager services] *******************************************
2025-05-25 04:03:58.352947 | orchestrator |
2025-05-25 04:03:58.352958 | orchestrator | TASK [Restart ceph manager service] ********************************************
2025-05-25 04:03:58.352968 | orchestrator | Sunday 25 May 2025 04:03:45 +0000 (0:00:11.261) 0:01:12.928 ************
2025-05-25 04:03:58.352979 | orchestrator | changed: [testbed-node-2]
2025-05-25 04:03:58.352990 | orchestrator |
2025-05-25 04:03:58.353014 | orchestrator | PLAY RECAP *********************************************************************
2025-05-25 04:03:58.353026 | orchestrator | testbed-manager : ok=9  changed=9  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2025-05-25 04:03:58.353037 | orchestrator | testbed-node-0 : ok=1  changed=1  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-05-25 04:03:58.353049 | orchestrator | testbed-node-1 : ok=1  changed=1  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-05-25 04:03:58.353071 | orchestrator | testbed-node-2 : ok=1  changed=1  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-05-25 04:03:58.353082 | orchestrator |
2025-05-25 04:03:58.353092 | orchestrator |
2025-05-25 04:03:58.353103 | orchestrator |
2025-05-25 04:03:58.353114 | orchestrator | TASKS RECAP ********************************************************************
2025-05-25 04:03:58.353130 | orchestrator | Sunday 25 May 2025 04:03:56 +0000 (0:00:11.032) 0:01:23.961 ************
2025-05-25 04:03:58.353152 | orchestrator | ===============================================================================
2025-05-25 04:03:58.353172 | orchestrator | Create admin user ------------------------------------------------------ 50.95s
2025-05-25 04:03:58.353184 | orchestrator | Restart ceph manager service ------------------------------------------- 23.79s
2025-05-25 04:03:58.353195 | orchestrator | Disable the ceph dashboard ---------------------------------------------- 1.96s
2025-05-25 04:03:58.353208 | orchestrator | Enable the ceph dashboard ----------------------------------------------- 1.12s
2025-05-25 04:03:58.353227 | orchestrator | Write ceph_dashboard_password to temporary file ------------------------- 1.06s
2025-05-25 04:03:58.353244 | orchestrator | Set mgr/dashboard/server_port to 7000 ----------------------------------- 1.03s
2025-05-25 04:03:58.353262 | orchestrator | Set mgr/dashboard/ssl to false ------------------------------------------ 0.98s
2025-05-25 04:03:58.353279 | orchestrator | Set mgr/dashboard/standby_behaviour to error ---------------------------- 0.95s
2025-05-25 04:03:58.353297 | orchestrator | Set mgr/dashboard/server_addr to 0.0.0.0 -------------------------------- 0.87s
2025-05-25 04:03:58.353315 | orchestrator | Set mgr/dashboard/standby_error_status_code to 404 ---------------------- 0.87s
2025-05-25 04:03:58.353343 | orchestrator | Remove temporary file for ceph_dashboard_password ----------------------- 0.15s
2025-05-25 04:03:58.353362 |
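The dashboard bootstrap play above drives the Ceph manager through its config and access-control interfaces. A rough CLI equivalent of those tasks, for orientation only (OSISM runs this via Ansible on testbed-manager; the password file path and environment variable here are illustrative, not taken from the job):

```shell
# Disable the dashboard module while reconfiguring it
ceph mgr module disable dashboard

# Serve plain HTTP on port 7000 on all addresses
ceph config set mgr mgr/dashboard/ssl false
ceph config set mgr mgr/dashboard/server_port 7000
ceph config set mgr mgr/dashboard/server_addr 0.0.0.0

# Standby managers answer with an error status instead of redirecting
ceph config set mgr mgr/dashboard/standby_behaviour error
ceph config set mgr mgr/dashboard/standby_error_status_code 404

# Re-enable the module and create the admin account from a password file
ceph mgr module enable dashboard
echo "$CEPH_DASHBOARD_PASSWORD" > /tmp/ceph_dashboard_password  # illustrative path
ceph dashboard ac-user-create admin -i /tmp/ceph_dashboard_password administrator
rm -f /tmp/ceph_dashboard_password
```

The restarts of the ceph-mgr services on testbed-node-0 through testbed-node-2 that follow in the log make the new dashboard settings take effect on every manager instance.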
orchestrator | 2025-05-25 04:03:58 | INFO  | Task 8f07b207-3c6d-4e25-84ee-dee7e479a720 is in state STARTED
2025-05-25 04:03:58.353378 | orchestrator | 2025-05-25 04:03:58 | INFO  | Task 74d32bef-f393-42b8-bfa3-4bb0a1100f52 is in state STARTED
2025-05-25 04:03:58.353390 | orchestrator | 2025-05-25 04:03:58 | INFO  | Task 6e378feb-528a-412a-95cd-f98cd2c708a8 is in state STARTED
2025-05-25 04:03:58.353401 | orchestrator | 2025-05-25 04:03:58 | INFO  | Wait 1 second(s) until the next check
2025-05-25 04:04:01.381316 | orchestrator | 2025-05-25 04:04:01 | INFO  | Task d6c24b89-ac59-4b65-9cc6-94f5016cf236 is in state STARTED
2025-05-25 04:04:01.382071 | orchestrator | 2025-05-25 04:04:01 | INFO  | Task 8f07b207-3c6d-4e25-84ee-dee7e479a720 is in state STARTED
2025-05-25 04:04:01.383506 | orchestrator | 2025-05-25 04:04:01 | INFO  | Task 74d32bef-f393-42b8-bfa3-4bb0a1100f52 is in state STARTED
2025-05-25 04:04:01.384070 | orchestrator | 2025-05-25 04:04:01 | INFO  | Task 6e378feb-528a-412a-95cd-f98cd2c708a8 is in state STARTED
2025-05-25 04:04:01.384116 | orchestrator | 2025-05-25 04:04:01 | INFO  | Wait 1 second(s) until the next check
2025-05-25 04:04:04.410335 | orchestrator | 2025-05-25 04:04:04 | INFO  | Task d6c24b89-ac59-4b65-9cc6-94f5016cf236 is in state STARTED
2025-05-25 04:04:04.410437 | orchestrator | 2025-05-25 04:04:04 | INFO  | Task 8f07b207-3c6d-4e25-84ee-dee7e479a720 is in state STARTED
2025-05-25 04:04:04.410451 | orchestrator | 2025-05-25 04:04:04 | INFO  | Task 74d32bef-f393-42b8-bfa3-4bb0a1100f52 is in state STARTED
2025-05-25 04:04:04.412656 | orchestrator | 2025-05-25 04:04:04 | INFO  | Task 6e378feb-528a-412a-95cd-f98cd2c708a8 is in state STARTED
2025-05-25 04:04:04.412733 | orchestrator | 2025-05-25 04:04:04 | INFO  | Wait 1 second(s) until the next check
2025-05-25 04:04:07.449400 | orchestrator | 2025-05-25 04:04:07 | INFO  | Task d6c24b89-ac59-4b65-9cc6-94f5016cf236 is in state STARTED
2025-05-25 04:04:07.450519 | orchestrator | 2025-05-25
04:04:07 | INFO  | Task 8f07b207-3c6d-4e25-84ee-dee7e479a720 is in state STARTED
8f07b207-3c6d-4e25-84ee-dee7e479a720 is in state STARTED 2025-05-25 04:05:02.347378 | orchestrator | 2025-05-25 04:05:02 | INFO  | Task 74d32bef-f393-42b8-bfa3-4bb0a1100f52 is in state STARTED 2025-05-25 04:05:02.348121 | orchestrator | 2025-05-25 04:05:02 | INFO  | Task 6e378feb-528a-412a-95cd-f98cd2c708a8 is in state STARTED 2025-05-25 04:05:02.348150 | orchestrator | 2025-05-25 04:05:02 | INFO  | Wait 1 second(s) until the next check 2025-05-25 04:05:05.381631 | orchestrator | 2025-05-25 04:05:05 | INFO  | Task d6c24b89-ac59-4b65-9cc6-94f5016cf236 is in state STARTED 2025-05-25 04:05:05.382105 | orchestrator | 2025-05-25 04:05:05 | INFO  | Task 8f07b207-3c6d-4e25-84ee-dee7e479a720 is in state STARTED 2025-05-25 04:05:05.382950 | orchestrator | 2025-05-25 04:05:05 | INFO  | Task 74d32bef-f393-42b8-bfa3-4bb0a1100f52 is in state STARTED 2025-05-25 04:05:05.384263 | orchestrator | 2025-05-25 04:05:05 | INFO  | Task 6e378feb-528a-412a-95cd-f98cd2c708a8 is in state STARTED 2025-05-25 04:05:05.384293 | orchestrator | 2025-05-25 04:05:05 | INFO  | Wait 1 second(s) until the next check 2025-05-25 04:05:08.432036 | orchestrator | 2025-05-25 04:05:08 | INFO  | Task d6c24b89-ac59-4b65-9cc6-94f5016cf236 is in state STARTED 2025-05-25 04:05:08.434086 | orchestrator | 2025-05-25 04:05:08 | INFO  | Task 8f07b207-3c6d-4e25-84ee-dee7e479a720 is in state STARTED 2025-05-25 04:05:08.435397 | orchestrator | 2025-05-25 04:05:08 | INFO  | Task 74d32bef-f393-42b8-bfa3-4bb0a1100f52 is in state STARTED 2025-05-25 04:05:08.437156 | orchestrator | 2025-05-25 04:05:08 | INFO  | Task 6e378feb-528a-412a-95cd-f98cd2c708a8 is in state STARTED 2025-05-25 04:05:08.437195 | orchestrator | 2025-05-25 04:05:08 | INFO  | Wait 1 second(s) until the next check 2025-05-25 04:05:11.477996 | orchestrator | 2025-05-25 04:05:11 | INFO  | Task d6c24b89-ac59-4b65-9cc6-94f5016cf236 is in state STARTED 2025-05-25 04:05:11.478209 | orchestrator | 2025-05-25 04:05:11 | INFO  | Task 
8f07b207-3c6d-4e25-84ee-dee7e479a720 is in state STARTED 2025-05-25 04:05:11.478638 | orchestrator | 2025-05-25 04:05:11 | INFO  | Task 74d32bef-f393-42b8-bfa3-4bb0a1100f52 is in state STARTED 2025-05-25 04:05:11.479194 | orchestrator | 2025-05-25 04:05:11 | INFO  | Task 6e378feb-528a-412a-95cd-f98cd2c708a8 is in state STARTED 2025-05-25 04:05:11.479224 | orchestrator | 2025-05-25 04:05:11 | INFO  | Wait 1 second(s) until the next check 2025-05-25 04:05:14.506151 | orchestrator | 2025-05-25 04:05:14 | INFO  | Task d6c24b89-ac59-4b65-9cc6-94f5016cf236 is in state STARTED 2025-05-25 04:05:14.506267 | orchestrator | 2025-05-25 04:05:14 | INFO  | Task 8f07b207-3c6d-4e25-84ee-dee7e479a720 is in state STARTED 2025-05-25 04:05:14.506550 | orchestrator | 2025-05-25 04:05:14 | INFO  | Task 74d32bef-f393-42b8-bfa3-4bb0a1100f52 is in state STARTED 2025-05-25 04:05:14.507640 | orchestrator | 2025-05-25 04:05:14 | INFO  | Task 6e378feb-528a-412a-95cd-f98cd2c708a8 is in state STARTED 2025-05-25 04:05:14.507675 | orchestrator | 2025-05-25 04:05:14 | INFO  | Wait 1 second(s) until the next check 2025-05-25 04:05:17.550642 | orchestrator | 2025-05-25 04:05:17 | INFO  | Task d6c24b89-ac59-4b65-9cc6-94f5016cf236 is in state STARTED 2025-05-25 04:05:17.551786 | orchestrator | 2025-05-25 04:05:17 | INFO  | Task 8f07b207-3c6d-4e25-84ee-dee7e479a720 is in state STARTED 2025-05-25 04:05:17.553611 | orchestrator | 2025-05-25 04:05:17 | INFO  | Task 74d32bef-f393-42b8-bfa3-4bb0a1100f52 is in state STARTED 2025-05-25 04:05:17.554925 | orchestrator | 2025-05-25 04:05:17 | INFO  | Task 6e378feb-528a-412a-95cd-f98cd2c708a8 is in state STARTED 2025-05-25 04:05:17.555041 | orchestrator | 2025-05-25 04:05:17 | INFO  | Wait 1 second(s) until the next check 2025-05-25 04:05:20.609117 | orchestrator | 2025-05-25 04:05:20 | INFO  | Task d6c24b89-ac59-4b65-9cc6-94f5016cf236 is in state STARTED 2025-05-25 04:05:20.610442 | orchestrator | 2025-05-25 04:05:20 | INFO  | Task 
8f07b207-3c6d-4e25-84ee-dee7e479a720 is in state STARTED 2025-05-25 04:05:20.611887 | orchestrator | 2025-05-25 04:05:20 | INFO  | Task 74d32bef-f393-42b8-bfa3-4bb0a1100f52 is in state STARTED 2025-05-25 04:05:20.613572 | orchestrator | 2025-05-25 04:05:20 | INFO  | Task 6e378feb-528a-412a-95cd-f98cd2c708a8 is in state STARTED 2025-05-25 04:05:20.613617 | orchestrator | 2025-05-25 04:05:20 | INFO  | Wait 1 second(s) until the next check 2025-05-25 04:05:23.668614 | orchestrator | 2025-05-25 04:05:23 | INFO  | Task d6c24b89-ac59-4b65-9cc6-94f5016cf236 is in state STARTED 2025-05-25 04:05:23.670170 | orchestrator | 2025-05-25 04:05:23 | INFO  | Task 8f07b207-3c6d-4e25-84ee-dee7e479a720 is in state STARTED 2025-05-25 04:05:23.672288 | orchestrator | 2025-05-25 04:05:23 | INFO  | Task 74d32bef-f393-42b8-bfa3-4bb0a1100f52 is in state STARTED 2025-05-25 04:05:23.674095 | orchestrator | 2025-05-25 04:05:23 | INFO  | Task 6e378feb-528a-412a-95cd-f98cd2c708a8 is in state STARTED 2025-05-25 04:05:23.674228 | orchestrator | 2025-05-25 04:05:23 | INFO  | Wait 1 second(s) until the next check 2025-05-25 04:05:26.721048 | orchestrator | 2025-05-25 04:05:26 | INFO  | Task d6c24b89-ac59-4b65-9cc6-94f5016cf236 is in state STARTED 2025-05-25 04:05:26.721623 | orchestrator | 2025-05-25 04:05:26 | INFO  | Task 8f07b207-3c6d-4e25-84ee-dee7e479a720 is in state STARTED 2025-05-25 04:05:26.725509 | orchestrator | 2025-05-25 04:05:26 | INFO  | Task 74d32bef-f393-42b8-bfa3-4bb0a1100f52 is in state STARTED 2025-05-25 04:05:26.726149 | orchestrator | 2025-05-25 04:05:26 | INFO  | Task 6e378feb-528a-412a-95cd-f98cd2c708a8 is in state STARTED 2025-05-25 04:05:26.726373 | orchestrator | 2025-05-25 04:05:26 | INFO  | Wait 1 second(s) until the next check 2025-05-25 04:05:29.769810 | orchestrator | 2025-05-25 04:05:29 | INFO  | Task d6c24b89-ac59-4b65-9cc6-94f5016cf236 is in state STARTED 2025-05-25 04:05:29.771594 | orchestrator | 2025-05-25 04:05:29 | INFO  | Task 
8f07b207-3c6d-4e25-84ee-dee7e479a720 is in state STARTED 2025-05-25 04:05:29.773536 | orchestrator | 2025-05-25 04:05:29 | INFO  | Task 74d32bef-f393-42b8-bfa3-4bb0a1100f52 is in state STARTED 2025-05-25 04:05:29.775347 | orchestrator | 2025-05-25 04:05:29 | INFO  | Task 6e378feb-528a-412a-95cd-f98cd2c708a8 is in state STARTED 2025-05-25 04:05:29.775386 | orchestrator | 2025-05-25 04:05:29 | INFO  | Wait 1 second(s) until the next check 2025-05-25 04:05:32.830280 | orchestrator | 2025-05-25 04:05:32 | INFO  | Task d6c24b89-ac59-4b65-9cc6-94f5016cf236 is in state STARTED 2025-05-25 04:05:32.834085 | orchestrator | 2025-05-25 04:05:32 | INFO  | Task 8f07b207-3c6d-4e25-84ee-dee7e479a720 is in state STARTED 2025-05-25 04:05:32.837313 | orchestrator | 2025-05-25 04:05:32 | INFO  | Task 74d32bef-f393-42b8-bfa3-4bb0a1100f52 is in state STARTED 2025-05-25 04:05:32.838766 | orchestrator | 2025-05-25 04:05:32 | INFO  | Task 6e378feb-528a-412a-95cd-f98cd2c708a8 is in state STARTED 2025-05-25 04:05:32.838835 | orchestrator | 2025-05-25 04:05:32 | INFO  | Wait 1 second(s) until the next check 2025-05-25 04:05:35.891831 | orchestrator | 2025-05-25 04:05:35 | INFO  | Task d6c24b89-ac59-4b65-9cc6-94f5016cf236 is in state STARTED 2025-05-25 04:05:35.893490 | orchestrator | 2025-05-25 04:05:35 | INFO  | Task 8f07b207-3c6d-4e25-84ee-dee7e479a720 is in state STARTED 2025-05-25 04:05:35.896491 | orchestrator | 2025-05-25 04:05:35 | INFO  | Task 74d32bef-f393-42b8-bfa3-4bb0a1100f52 is in state STARTED 2025-05-25 04:05:35.898967 | orchestrator | 2025-05-25 04:05:35 | INFO  | Task 6e378feb-528a-412a-95cd-f98cd2c708a8 is in state STARTED 2025-05-25 04:05:35.899051 | orchestrator | 2025-05-25 04:05:35 | INFO  | Wait 1 second(s) until the next check 2025-05-25 04:05:38.952636 | orchestrator | 2025-05-25 04:05:38 | INFO  | Task d6c24b89-ac59-4b65-9cc6-94f5016cf236 is in state STARTED 2025-05-25 04:05:38.954333 | orchestrator | 2025-05-25 04:05:38 | INFO  | Task 
9259ea8e-37b5-449f-a04a-bd0975550f8c is in state STARTED
2025-05-25 04:05:38.957351 | orchestrator | 2025-05-25 04:05:38 | INFO  | Task 8f07b207-3c6d-4e25-84ee-dee7e479a720 is in state SUCCESS
2025-05-25 04:05:38.957428 | orchestrator |
2025-05-25 04:05:38.959276 | orchestrator |
2025-05-25 04:05:38.959385 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2025-05-25 04:05:38.959403 | orchestrator |
2025-05-25 04:05:38.959416 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2025-05-25 04:05:38.959427 | orchestrator | Sunday 25 May 2025 04:02:38 +0000 (0:00:00.249) 0:00:00.249 ************
2025-05-25 04:05:38.959439 | orchestrator | ok: [testbed-node-0]
2025-05-25 04:05:38.959452 | orchestrator | ok: [testbed-node-1]
2025-05-25 04:05:38.959463 | orchestrator | ok: [testbed-node-2]
2025-05-25 04:05:38.959475 | orchestrator |
2025-05-25 04:05:38.959486 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2025-05-25 04:05:38.959497 | orchestrator | Sunday 25 May 2025 04:02:38 +0000 (0:00:00.289) 0:00:00.539 ************
2025-05-25 04:05:38.959508 | orchestrator | ok: [testbed-node-0] => (item=enable_glance_True)
2025-05-25 04:05:38.959520 | orchestrator | ok: [testbed-node-1] => (item=enable_glance_True)
2025-05-25 04:05:38.959531 | orchestrator | ok: [testbed-node-2] => (item=enable_glance_True)
2025-05-25 04:05:38.959541 | orchestrator |
2025-05-25 04:05:38.959553 | orchestrator | PLAY [Apply role glance] *******************************************************
2025-05-25 04:05:38.959564 | orchestrator |
2025-05-25 04:05:38.959575 | orchestrator | TASK [glance : include_tasks] **************************************************
2025-05-25 04:05:38.959586 | orchestrator | Sunday 25 May 2025 04:02:39 +0000 (0:00:00.340) 0:00:00.880 ************
2025-05-25 04:05:38.959597 | orchestrator | included: /ansible/roles/glance/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-05-25 04:05:38.959608 | orchestrator |
2025-05-25 04:05:38.959619 | orchestrator | TASK [service-ks-register : glance | Creating services] ************************
2025-05-25 04:05:38.959630 | orchestrator | Sunday 25 May 2025 04:02:39 +0000 (0:00:00.559) 0:00:01.440 ************
2025-05-25 04:05:38.959657 | orchestrator | changed: [testbed-node-0] => (item=glance (image))
2025-05-25 04:05:38.959668 | orchestrator |
2025-05-25 04:05:38.959679 | orchestrator | TASK [service-ks-register : glance | Creating endpoints] ***********************
2025-05-25 04:05:38.959690 | orchestrator | Sunday 25 May 2025 04:02:43 +0000 (0:00:03.303) 0:00:04.744 ************
2025-05-25 04:05:38.959702 | orchestrator | changed: [testbed-node-0] => (item=glance -> https://api-int.testbed.osism.xyz:9292 -> internal)
2025-05-25 04:05:38.959713 | orchestrator | changed: [testbed-node-0] => (item=glance -> https://api.testbed.osism.xyz:9292 -> public)
2025-05-25 04:05:38.959766 | orchestrator |
2025-05-25 04:05:38.959779 | orchestrator | TASK [service-ks-register : glance | Creating projects] ************************
2025-05-25 04:05:38.959790 | orchestrator | Sunday 25 May 2025 04:02:49 +0000 (0:00:06.466) 0:00:11.210 ************
2025-05-25 04:05:38.959821 | orchestrator | changed: [testbed-node-0] => (item=service)
2025-05-25 04:05:38.959833 | orchestrator |
2025-05-25 04:05:38.959844 | orchestrator | TASK [service-ks-register : glance | Creating users] ***************************
2025-05-25 04:05:38.959854 | orchestrator | Sunday 25 May 2025 04:02:53 +0000 (0:00:03.514) 0:00:14.724 ************
2025-05-25 04:05:38.959866 | orchestrator | [WARNING]: Module did not set no_log for update_password
2025-05-25 04:05:38.959877 | orchestrator | changed: [testbed-node-0] => (item=glance -> service)
2025-05-25 04:05:38.959889 | orchestrator |
2025-05-25 04:05:38.959900 | orchestrator | TASK [service-ks-register : glance | Creating roles] ***************************
2025-05-25 04:05:38.959910 | orchestrator | Sunday 25 May 2025 04:02:56 +0000 (0:00:03.862) 0:00:18.587 ************
2025-05-25 04:05:38.959922 | orchestrator | ok: [testbed-node-0] => (item=admin)
2025-05-25 04:05:38.959932 | orchestrator |
2025-05-25 04:05:38.959943 | orchestrator | TASK [service-ks-register : glance | Granting user roles] **********************
2025-05-25 04:05:38.959954 | orchestrator | Sunday 25 May 2025 04:03:00 +0000 (0:00:03.384) 0:00:21.971 ************
2025-05-25 04:05:38.959965 | orchestrator | changed: [testbed-node-0] => (item=glance -> service -> admin)
2025-05-25 04:05:38.959976 | orchestrator |
2025-05-25 04:05:38.959986 | orchestrator | TASK [glance : Ensuring config directories exist] ******************************
2025-05-25 04:05:38.959997 | orchestrator | Sunday 25 May 2025 04:03:04 +0000 (0:00:04.510) 0:00:26.482 ************
2025-05-25 04:05:38.960034 | orchestrator | changed: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server 
testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-05-25 04:05:38.960057 | orchestrator | changed: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': 
True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-05-25 04:05:38.960079 | orchestrator | changed: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check 
inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})
2025-05-25 04:05:38.960092 | orchestrator |
2025-05-25 04:05:38.960103 | orchestrator | TASK [glance : include_tasks] **************************************************
2025-05-25 04:05:38.960114 | orchestrator | Sunday 25 May 2025 04:03:09 +0000 (0:00:04.747) 0:00:31.230 ************
2025-05-25 04:05:38.960125 | orchestrator | included: /ansible/roles/glance/tasks/external_ceph.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-05-25 04:05:38.960136 | orchestrator |
2025-05-25 04:05:38.960154 | orchestrator | TASK [glance : Ensuring glance service ceph config subdir exists] **************
2025-05-25 04:05:38.960165 | orchestrator | Sunday 25 May 2025 04:03:09 +0000 (0:00:00.470) 0:00:31.700 ************
2025-05-25 04:05:38.960176 | orchestrator | changed: [testbed-node-2]
2025-05-25 04:05:38.960187 | orchestrator | changed: [testbed-node-1]
2025-05-25 04:05:38.960198 | orchestrator | changed: [testbed-node-0]
2025-05-25 04:05:38.960209 | orchestrator |
2025-05-25 04:05:38.960220 | orchestrator | TASK [glance : Copy over multiple ceph configs for Glance] *********************
2025-05-25 04:05:38.960230 | orchestrator | Sunday 25 May 2025 04:03:13 +0000 (0:00:03.499) 0:00:35.200 ************
2025-05-25 04:05:38.960241 | orchestrator | changed: [testbed-node-2] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'enabled': True})
2025-05-25 04:05:38.960252 | orchestrator | changed: [testbed-node-1] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'enabled': True})
2025-05-25 04:05:38.960263 | orchestrator | changed: [testbed-node-0] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'enabled': True})
2025-05-25 04:05:38.960274 | orchestrator |
2025-05-25 04:05:38.960290 | orchestrator | TASK [glance : Copy over ceph Glance keyrings] *********************************
2025-05-25 04:05:38.960300 | orchestrator | Sunday 25 May 2025 04:03:14 +0000 (0:00:01.437) 0:00:36.637 ************
2025-05-25 04:05:38.960311 | orchestrator | changed: [testbed-node-0] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'enabled': True})
2025-05-25 04:05:38.960322 | orchestrator | changed: [testbed-node-1] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'enabled': True})
2025-05-25 04:05:38.960338 | orchestrator | changed: [testbed-node-2] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'enabled': True})
2025-05-25 04:05:38.960349 | orchestrator |
2025-05-25 04:05:38.960360 | orchestrator | TASK [glance : Ensuring config directory has correct owner and permission] *****
2025-05-25 04:05:38.960371 | orchestrator | Sunday 25 May 2025 04:03:16 +0000 (0:00:01.110) 0:00:37.748 ************
2025-05-25 04:05:38.960381 | orchestrator | ok: [testbed-node-0]
2025-05-25 04:05:38.960392 | orchestrator | ok: [testbed-node-1]
2025-05-25 04:05:38.960403 | orchestrator | ok: [testbed-node-2]
2025-05-25 04:05:38.960413 | orchestrator |
2025-05-25 04:05:38.960424 | orchestrator | TASK [glance : Check if policies shall be overwritten] *************************
2025-05-25 04:05:38.960435 | orchestrator | Sunday 25 May 2025 04:03:16 +0000 (0:00:00.701) 0:00:38.449 ************
2025-05-25 04:05:38.960445 | orchestrator | skipping: [testbed-node-0]
2025-05-25 04:05:38.960456 | orchestrator |
2025-05-25 04:05:38.960466 | orchestrator | TASK [glance : Set glance policy file] *****************************************
2025-05-25 04:05:38.960477 | orchestrator | Sunday 25 May 2025 04:03:16 +0000 (0:00:00.102) 0:00:38.552 ************
2025-05-25 04:05:38.960488 | orchestrator | skipping: [testbed-node-0]
2025-05-25 04:05:38.960532 | orchestrator | skipping: [testbed-node-1]
2025-05-25 04:05:38.960545 | orchestrator | skipping: [testbed-node-2]
2025-05-25 04:05:38.960556 | orchestrator |
2025-05-25 04:05:38.960567
| orchestrator | TASK [glance : include_tasks] ************************************************** 2025-05-25 04:05:38.960578 | orchestrator | Sunday 25 May 2025 04:03:17 +0000 (0:00:00.262) 0:00:38.814 ************ 2025-05-25 04:05:38.960589 | orchestrator | included: /ansible/roles/glance/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-25 04:05:38.960600 | orchestrator | 2025-05-25 04:05:38.960610 | orchestrator | TASK [service-cert-copy : glance | Copying over extra CA certificates] ********* 2025-05-25 04:05:38.960621 | orchestrator | Sunday 25 May 2025 04:03:17 +0000 (0:00:00.467) 0:00:39.281 ************ 2025-05-25 04:05:38.960639 | orchestrator | changed: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-05-25 04:05:38.960666 | orchestrator | changed: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 
192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-05-25 04:05:38.960679 | orchestrator | changed: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-05-25 04:05:38.960692 | orchestrator | 2025-05-25 04:05:38.960703 | orchestrator | TASK [service-cert-copy : glance | 
Copying over backend internal TLS certificate] *** 2025-05-25 04:05:38.960714 | orchestrator | Sunday 25 May 2025 04:03:23 +0000 (0:00:05.471) 0:00:44.753 ************ 2025-05-25 04:05:38.960762 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2025-05-25 04:05:38.960784 | orchestrator | skipping: [testbed-node-1] 2025-05-25 
04:05:38.960796 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2025-05-25 04:05:38.960809 | orchestrator | skipping: [testbed-node-2] 2025-05-25 04:05:38.960828 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 
'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2025-05-25 04:05:38.960847 | orchestrator | skipping: [testbed-node-0] 2025-05-25 04:05:38.960858 | orchestrator | 2025-05-25 04:05:38.960869 | orchestrator | TASK [service-cert-copy : glance | Copying over backend internal TLS key] ****** 2025-05-25 04:05:38.960880 | orchestrator | Sunday 25 May 2025 04:03:26 +0000 (0:00:03.903) 0:00:48.656 ************ 2025-05-25 04:05:38.960897 | orchestrator | skipping: [testbed-node-1] => (item={'key': 
'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2025-05-25 04:05:38.960909 | orchestrator | skipping: [testbed-node-1] 2025-05-25 04:05:38.960927 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 
'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2025-05-25 04:05:38.960946 | orchestrator | skipping: [testbed-node-2] 2025-05-25 04:05:38.960963 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': 
['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2025-05-25 04:05:38.960975 | orchestrator | skipping: [testbed-node-0] 2025-05-25 04:05:38.960986 | orchestrator | 2025-05-25 04:05:38.960996 | orchestrator | TASK [glance : Creating TLS backend PEM File] ********************************** 2025-05-25 04:05:38.961007 | orchestrator | Sunday 25 May 2025 04:03:30 +0000 (0:00:04.041) 0:00:52.698 ************ 2025-05-25 04:05:38.961018 | orchestrator | skipping: [testbed-node-0] 2025-05-25 04:05:38.961029 | orchestrator | skipping: [testbed-node-2] 2025-05-25 04:05:38.961040 | orchestrator | skipping: [testbed-node-1] 2025-05-25 04:05:38.961050 | orchestrator | 2025-05-25 04:05:38.961061 | orchestrator | TASK 
[glance : Copying over config.json files for services] ******************** 2025-05-25 04:05:38.961072 | orchestrator | Sunday 25 May 2025 04:03:34 +0000 (0:00:03.889) 0:00:56.587 ************ 2025-05-25 04:05:38.961089 | orchestrator | changed: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-05-25 04:05:38.961119 | orchestrator | changed: 
[testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-05-25 04:05:38.961132 | orchestrator | changed: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': 
'', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-05-25 04:05:38.961151 | orchestrator | 2025-05-25 04:05:38.961162 | orchestrator | TASK [glance : Copying over glance-api.conf] *********************************** 2025-05-25 04:05:38.961173 | orchestrator | Sunday 25 May 2025 04:03:38 +0000 (0:00:03.865) 0:01:00.453 ************ 2025-05-25 04:05:38.961183 | orchestrator | changed: [testbed-node-2] 2025-05-25 04:05:38.961194 | orchestrator | changed: [testbed-node-1] 2025-05-25 04:05:38.961205 | orchestrator | changed: [testbed-node-0] 2025-05-25 04:05:38.961216 | orchestrator | 2025-05-25 04:05:38.961226 | 
orchestrator | TASK [glance : Copying over glance-cache.conf for glance_api] ******************
2025-05-25 04:05:38.961237 | orchestrator | Sunday 25 May 2025 04:03:45 +0000 (0:00:06.930) 0:01:07.384 ************
2025-05-25 04:05:38.961248 | orchestrator | skipping: [testbed-node-1]
2025-05-25 04:05:38.961258 | orchestrator | skipping: [testbed-node-0]
2025-05-25 04:05:38.961269 | orchestrator | skipping: [testbed-node-2]
2025-05-25 04:05:38.961280 | orchestrator |
2025-05-25 04:05:38.961291 | orchestrator | TASK [glance : Copying over glance-swift.conf for glance_api] ******************
2025-05-25 04:05:38.961521 | orchestrator | Sunday 25 May 2025 04:03:51 +0000 (0:00:05.708) 0:01:13.092 ************
2025-05-25 04:05:38.961540 | orchestrator | skipping: [testbed-node-0]
2025-05-25 04:05:38.961552 | orchestrator | skipping: [testbed-node-1]
2025-05-25 04:05:38.961562 | orchestrator | skipping: [testbed-node-2]
2025-05-25 04:05:38.961573 | orchestrator |
2025-05-25 04:05:38.961584 | orchestrator | TASK [glance : Copying over glance-image-import.conf] **************************
2025-05-25 04:05:38.961595 | orchestrator | Sunday 25 May 2025 04:03:55 +0000 (0:00:04.353) 0:01:17.446 ************
2025-05-25 04:05:38.961606 | orchestrator | skipping: [testbed-node-0]
2025-05-25 04:05:38.961617 | orchestrator | skipping: [testbed-node-2]
2025-05-25 04:05:38.961627 | orchestrator | skipping: [testbed-node-1]
2025-05-25 04:05:38.961638 | orchestrator |
2025-05-25 04:05:38.961649 | orchestrator | TASK [glance : Copying over property-protections-rules.conf] *******************
2025-05-25 04:05:38.961659 | orchestrator | Sunday 25 May 2025 04:04:01 +0000 (0:00:06.120) 0:01:23.566 ************
2025-05-25 04:05:38.961670 | orchestrator | skipping: [testbed-node-2]
2025-05-25 04:05:38.961681 | orchestrator | skipping: [testbed-node-0]
2025-05-25 04:05:38.961692 | orchestrator | skipping: [testbed-node-1]
2025-05-25 04:05:38.961702 | orchestrator |
2025-05-25 04:05:38.961713 | orchestrator | TASK [glance : Copying over existing policy file] ******************************
2025-05-25 04:05:38.961754 | orchestrator | Sunday 25 May 2025 04:04:06 +0000 (0:00:04.830) 0:01:28.397 ************
2025-05-25 04:05:38.961773 | orchestrator | skipping: [testbed-node-0]
2025-05-25 04:05:38.961793 | orchestrator | skipping: [testbed-node-1]
2025-05-25 04:05:38.961805 | orchestrator | skipping: [testbed-node-2]
2025-05-25 04:05:38.961816 | orchestrator |
2025-05-25 04:05:38.961827 | orchestrator | TASK [glance : Copying over glance-haproxy-tls.cfg] ****************************
2025-05-25 04:05:38.961837 | orchestrator | Sunday 25 May 2025 04:04:07 +0000 (0:00:00.470) 0:01:28.868 ************
2025-05-25 04:05:38.961927 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/glance/templates/glance-tls-proxy.cfg.j2)
2025-05-25 04:05:38.961943 | orchestrator | skipping: [testbed-node-0]
2025-05-25 04:05:38.961954 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/glance/templates/glance-tls-proxy.cfg.j2)
2025-05-25 04:05:38.961965 | orchestrator | skipping: [testbed-node-1]
2025-05-25 04:05:38.961976 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/glance/templates/glance-tls-proxy.cfg.j2)
2025-05-25 04:05:38.961987 | orchestrator | skipping: [testbed-node-2]
2025-05-25 04:05:38.961997 | orchestrator |
2025-05-25 04:05:38.962009 | orchestrator | TASK [glance : Check glance containers] ****************************************
2025-05-25 04:05:38.962077 | orchestrator | Sunday 25 May 2025 04:04:11 +0000 (0:00:04.340) 0:01:33.209 ************
2025-05-25 04:05:38.962101 | orchestrator | changed: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 
'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-05-25 04:05:38.962127 | orchestrator | changed: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 
'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-05-25 04:05:38.962147 | orchestrator | changed: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': 
{'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-05-25 04:05:38.962166 | orchestrator | 2025-05-25 04:05:38.962266 | orchestrator | TASK [glance : include_tasks] ************************************************** 2025-05-25 04:05:38.962282 | orchestrator | Sunday 25 May 2025 04:04:15 +0000 (0:00:04.149) 0:01:37.358 ************ 2025-05-25 04:05:38.962292 | orchestrator | skipping: [testbed-node-0] 2025-05-25 04:05:38.962303 | orchestrator | skipping: [testbed-node-1] 2025-05-25 04:05:38.962314 | orchestrator | skipping: [testbed-node-2] 2025-05-25 04:05:38.962325 | orchestrator | 2025-05-25 04:05:38.962335 | orchestrator | TASK [glance : Creating Glance database] *************************************** 2025-05-25 04:05:38.962346 | orchestrator | Sunday 25 May 2025 04:04:15 +0000 (0:00:00.229) 0:01:37.588 ************ 2025-05-25 04:05:38.962356 | orchestrator | changed: [testbed-node-0] 2025-05-25 04:05:38.962367 | orchestrator | 2025-05-25 04:05:38.962378 | orchestrator | TASK [glance : Creating Glance database user and setting permissions] ********** 2025-05-25 04:05:38.962388 | orchestrator | Sunday 25 May 2025 
04:04:17 +0000 (0:00:01.744) 0:01:39.332 ************
2025-05-25 04:05:38.962399 | orchestrator | changed: [testbed-node-0]
2025-05-25 04:05:38.962410 | orchestrator |
2025-05-25 04:05:38.962420 | orchestrator | TASK [glance : Enable log_bin_trust_function_creators function] ****************
2025-05-25 04:05:38.962431 | orchestrator | Sunday 25 May 2025 04:04:19 +0000 (0:00:01.778) 0:01:41.111 ************
2025-05-25 04:05:38.962442 | orchestrator | changed: [testbed-node-0]
2025-05-25 04:05:38.962453 | orchestrator |
2025-05-25 04:05:38.962464 | orchestrator | TASK [glance : Running Glance bootstrap container] *****************************
2025-05-25 04:05:38.962474 | orchestrator | Sunday 25 May 2025 04:04:21 +0000 (0:00:01.692) 0:01:42.804 ************
2025-05-25 04:05:38.962485 | orchestrator | changed: [testbed-node-0]
2025-05-25 04:05:38.962495 | orchestrator |
2025-05-25 04:05:38.962506 | orchestrator | TASK [glance : Disable log_bin_trust_function_creators function] ***************
2025-05-25 04:05:38.962517 | orchestrator | Sunday 25 May 2025 04:04:49 +0000 (0:00:28.818) 0:02:11.622 ************
2025-05-25 04:05:38.962528 | orchestrator | changed: [testbed-node-0]
2025-05-25 04:05:38.962539 | orchestrator |
2025-05-25 04:05:38.962557 | orchestrator | TASK [glance : Flush handlers] *************************************************
2025-05-25 04:05:38.962568 | orchestrator | Sunday 25 May 2025 04:04:52 +0000 (0:00:00.062) 0:02:13.896 ************
2025-05-25 04:05:38.962579 | orchestrator |
2025-05-25 04:05:38.962589 | orchestrator | TASK [glance : Flush handlers] *************************************************
2025-05-25 04:05:38.962600 | orchestrator | Sunday 25 May 2025 04:04:52 +0000 (0:00:00.062) 0:02:13.959 ************
2025-05-25 04:05:38.962611 | orchestrator |
2025-05-25 04:05:38.962621 | orchestrator | TASK [glance : Flush handlers] *************************************************
2025-05-25 04:05:38.962632 | orchestrator | Sunday 25 May 2025 04:04:52 +0000 (0:00:00.062) 0:02:14.021 ************
2025-05-25 04:05:38.962642 | orchestrator |
2025-05-25 04:05:38.962653 | orchestrator | RUNNING HANDLER [glance : Restart glance-api container] ************************
2025-05-25 04:05:38.962672 | orchestrator | Sunday 25 May 2025 04:04:52 +0000 (0:00:00.063) 0:02:14.085 ************
2025-05-25 04:05:38.962683 | orchestrator | changed: [testbed-node-0]
2025-05-25 04:05:38.962693 | orchestrator | changed: [testbed-node-2]
2025-05-25 04:05:38.962704 | orchestrator | changed: [testbed-node-1]
2025-05-25 04:05:38.962715 | orchestrator |
2025-05-25 04:05:38.962798 | orchestrator | PLAY RECAP *********************************************************************
2025-05-25 04:05:38.962812 | orchestrator | testbed-node-0 : ok=26  changed=19  unreachable=0 failed=0 skipped=12  rescued=0 ignored=0
2025-05-25 04:05:38.962824 | orchestrator | testbed-node-1 : ok=15  changed=9  unreachable=0 failed=0 skipped=11  rescued=0 ignored=0
2025-05-25 04:05:38.962842 | orchestrator | testbed-node-2 : ok=15  changed=9  unreachable=0 failed=0 skipped=11  rescued=0 ignored=0
2025-05-25 04:05:38.962853 | orchestrator |
2025-05-25 04:05:38.962863 | orchestrator |
2025-05-25 04:05:38.962874 | orchestrator | TASKS RECAP ********************************************************************
2025-05-25 04:05:38.962885 | orchestrator | Sunday 25 May 2025 04:05:36 +0000 (0:00:44.103) 0:02:58.188 ************
2025-05-25 04:05:38.962896 | orchestrator | ===============================================================================
2025-05-25 04:05:38.962909 | orchestrator | glance : Restart glance-api container ---------------------------------- 44.10s
2025-05-25 04:05:38.962921 | orchestrator | glance : Running Glance bootstrap container ---------------------------- 28.82s
2025-05-25 04:05:38.962933 | orchestrator | glance : Copying over glance-api.conf ----------------------------------- 6.93s
2025-05-25 04:05:38.962945 | orchestrator | service-ks-register : glance | Creating endpoints ----------------------- 6.47s
2025-05-25 04:05:38.962957 | orchestrator | glance : Copying over glance-image-import.conf -------------------------- 6.12s
2025-05-25 04:05:38.962969 | orchestrator | glance : Copying over glance-cache.conf for glance_api ------------------ 5.71s
2025-05-25 04:05:38.962982 | orchestrator | service-cert-copy : glance | Copying over extra CA certificates --------- 5.47s
2025-05-25 04:05:38.962994 | orchestrator | glance : Copying over property-protections-rules.conf ------------------- 4.83s
2025-05-25 04:05:38.963006 | orchestrator | glance : Ensuring config directories exist ------------------------------ 4.75s
2025-05-25 04:05:38.963018 | orchestrator | service-ks-register : glance | Granting user roles ---------------------- 4.51s
2025-05-25 04:05:38.963030 | orchestrator | glance : Copying over glance-swift.conf for glance_api ------------------ 4.35s
2025-05-25 04:05:38.963042 | orchestrator | glance : Copying over glance-haproxy-tls.cfg ---------------------------- 4.34s
2025-05-25 04:05:38.963055 | orchestrator | glance : Check glance containers ---------------------------------------- 4.15s
2025-05-25 04:05:38.963067 | orchestrator | service-cert-copy : glance | Copying over backend internal TLS key ------ 4.04s
2025-05-25 04:05:38.963079 | orchestrator | service-cert-copy : glance | Copying over backend internal TLS certificate --- 3.90s
2025-05-25 04:05:38.963091 | orchestrator | glance : Creating TLS backend PEM File ---------------------------------- 3.89s
2025-05-25 04:05:38.963103 | orchestrator | glance : Copying over config.json files for services -------------------- 3.87s
2025-05-25 04:05:38.963115 | orchestrator | service-ks-register : glance | Creating users --------------------------- 3.86s
2025-05-25 04:05:38.963126 | orchestrator | service-ks-register : glance | Creating projects ------------------------ 3.51s
2025-05-25 04:05:38.963137 | orchestrator | glance : Ensuring glance service ceph config subdir exists -------------- 3.50s
2025-05-25 04:05:38.963146 | orchestrator | 2025-05-25 04:05:38 | INFO  | Task 74d32bef-f393-42b8-bfa3-4bb0a1100f52 is in state STARTED
2025-05-25 04:05:38.963156 | orchestrator | 2025-05-25 04:05:38 | INFO  | Task 6e378feb-528a-412a-95cd-f98cd2c708a8 is in state STARTED
2025-05-25 04:05:38.963166 | orchestrator | 2025-05-25 04:05:38 | INFO  | Wait 1 second(s) until the next check
2025-05-25 04:05:42.013563 | orchestrator | 2025-05-25 04:05:42 | INFO  | Task d6c24b89-ac59-4b65-9cc6-94f5016cf236 is in state STARTED
2025-05-25 04:05:42.019479 | orchestrator | 2025-05-25 04:05:42 | INFO  | Task 9259ea8e-37b5-449f-a04a-bd0975550f8c is in state STARTED
2025-05-25 04:05:42.023384 | orchestrator | 2025-05-25 04:05:42 | INFO  | Task 74d32bef-f393-42b8-bfa3-4bb0a1100f52 is in state STARTED
2025-05-25 04:05:42.025501 | orchestrator | 2025-05-25 04:05:42 | INFO  | Task 6e378feb-528a-412a-95cd-f98cd2c708a8 is in state STARTED
2025-05-25 04:05:42.025563 | orchestrator | 2025-05-25 04:05:42 | INFO  | Wait 1 second(s) until the next check
2025-05-25 04:05:45.064449 | orchestrator | 2025-05-25 04:05:45 | INFO  | Task d6c24b89-ac59-4b65-9cc6-94f5016cf236 is in state STARTED
2025-05-25 04:05:45.065550 | orchestrator | 2025-05-25 04:05:45 | INFO  | Task 9259ea8e-37b5-449f-a04a-bd0975550f8c is in state STARTED
2025-05-25 04:05:45.067166 | orchestrator | 2025-05-25 04:05:45 | INFO  | Task 74d32bef-f393-42b8-bfa3-4bb0a1100f52 is in state STARTED
2025-05-25 04:05:45.068557 | orchestrator | 2025-05-25 04:05:45 | INFO  | Task 6e378feb-528a-412a-95cd-f98cd2c708a8 is in state STARTED
2025-05-25 04:05:45.068638 | orchestrator | 2025-05-25 04:05:45 | INFO  | Wait 1 second(s) until the next check
2025-05-25 04:05:48.116904 | orchestrator | 2025-05-25 04:05:48 | INFO  | Task d6c24b89-ac59-4b65-9cc6-94f5016cf236 is in state STARTED
2025-05-25 04:05:48.117015 | orchestrator | 2025-05-25 04:05:48 | INFO  | Task 9259ea8e-37b5-449f-a04a-bd0975550f8c is in state STARTED
2025-05-25 04:05:48.117295 | orchestrator | 2025-05-25 04:05:48 | INFO  | Task 74d32bef-f393-42b8-bfa3-4bb0a1100f52 is in state STARTED
2025-05-25 04:05:48.118132 | orchestrator | 2025-05-25 04:05:48 | INFO  | Task 6e378feb-528a-412a-95cd-f98cd2c708a8 is in state STARTED
2025-05-25 04:05:48.118162 | orchestrator | 2025-05-25 04:05:48 | INFO  | Wait 1 second(s) until the next check
2025-05-25 04:05:51.167633 | orchestrator | 2025-05-25 04:05:51 | INFO  | Task d6c24b89-ac59-4b65-9cc6-94f5016cf236 is in state STARTED
2025-05-25 04:05:51.169581 | orchestrator | 2025-05-25 04:05:51 | INFO  | Task 9259ea8e-37b5-449f-a04a-bd0975550f8c is in state STARTED
2025-05-25 04:05:51.171923 | orchestrator | 2025-05-25 04:05:51 | INFO  | Task 74d32bef-f393-42b8-bfa3-4bb0a1100f52 is in state STARTED
2025-05-25 04:05:51.173768 | orchestrator | 2025-05-25 04:05:51 | INFO  | Task 6e378feb-528a-412a-95cd-f98cd2c708a8 is in state STARTED
2025-05-25 04:05:51.173984 | orchestrator | 2025-05-25 04:05:51 | INFO  | Wait 1 second(s) until the next check
2025-05-25 04:05:54.224027 | orchestrator | 2025-05-25 04:05:54 | INFO  | Task d6c24b89-ac59-4b65-9cc6-94f5016cf236 is in state STARTED
2025-05-25 04:05:54.227419 | orchestrator | 2025-05-25 04:05:54 | INFO  | Task 9259ea8e-37b5-449f-a04a-bd0975550f8c is in state STARTED
2025-05-25 04:05:54.236401 | orchestrator |
2025-05-25 04:05:54.236476 | orchestrator |
2025-05-25 04:05:54.236491 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2025-05-25 04:05:54.236504 | orchestrator |
2025-05-25 04:05:54.236516 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2025-05-25 04:05:54.236528 | orchestrator | Sunday 25 May 2025 04:02:32 +0000 (0:00:00.271) 0:00:00.271 ************
2025-05-25 04:05:54.236539 | orchestrator | ok: [testbed-manager]
2025-05-25 04:05:54.236552 | orchestrator | ok: [testbed-node-0]
2025-05-25 04:05:54.236563 | orchestrator | ok: [testbed-node-1] 2025-05-25 04:05:54.236574 | orchestrator | ok: [testbed-node-2] 2025-05-25 04:05:54.236584 | orchestrator | ok: [testbed-node-3] 2025-05-25 04:05:54.236595 | orchestrator | ok: [testbed-node-4] 2025-05-25 04:05:54.236606 | orchestrator | ok: [testbed-node-5] 2025-05-25 04:05:54.236618 | orchestrator | 2025-05-25 04:05:54.236783 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-05-25 04:05:54.237268 | orchestrator | Sunday 25 May 2025 04:02:33 +0000 (0:00:00.719) 0:00:00.990 ************ 2025-05-25 04:05:54.237289 | orchestrator | ok: [testbed-manager] => (item=enable_prometheus_True) 2025-05-25 04:05:54.237302 | orchestrator | ok: [testbed-node-0] => (item=enable_prometheus_True) 2025-05-25 04:05:54.237314 | orchestrator | ok: [testbed-node-1] => (item=enable_prometheus_True) 2025-05-25 04:05:54.237325 | orchestrator | ok: [testbed-node-2] => (item=enable_prometheus_True) 2025-05-25 04:05:54.237335 | orchestrator | ok: [testbed-node-3] => (item=enable_prometheus_True) 2025-05-25 04:05:54.237346 | orchestrator | ok: [testbed-node-4] => (item=enable_prometheus_True) 2025-05-25 04:05:54.237357 | orchestrator | ok: [testbed-node-5] => (item=enable_prometheus_True) 2025-05-25 04:05:54.237367 | orchestrator | 2025-05-25 04:05:54.237378 | orchestrator | PLAY [Apply role prometheus] *************************************************** 2025-05-25 04:05:54.237445 | orchestrator | 2025-05-25 04:05:54.237458 | orchestrator | TASK [prometheus : include_tasks] ********************************************** 2025-05-25 04:05:54.237469 | orchestrator | Sunday 25 May 2025 04:02:33 +0000 (0:00:00.584) 0:00:01.574 ************ 2025-05-25 04:05:54.237480 | orchestrator | included: /ansible/roles/prometheus/tasks/deploy.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-05-25 04:05:54.237492 | 
orchestrator | 2025-05-25 04:05:54.237503 | orchestrator | TASK [prometheus : Ensuring config directories exist] ************************** 2025-05-25 04:05:54.237514 | orchestrator | Sunday 25 May 2025 04:02:34 +0000 (0:00:01.198) 0:00:02.773 ************ 2025-05-25 04:05:54.237529 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.2', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}}) 2025-05-25 04:05:54.237544 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-05-25 04:05:54.237571 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': 
['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-05-25 04:05:54.237583 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-05-25 04:05:54.237611 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-05-25 04:05:54.237688 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-05-25 04:05:54.237701 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 
'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-05-25 04:05:54.237714 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-25 04:05:54.237815 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-25 04:05:54.237828 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-25 04:05:54.237847 | orchestrator | changed: [testbed-node-3] 
=> (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-05-25 04:05:54.237859 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-05-25 04:05:54.237890 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-05-25 04:05:54.237905 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': 
['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-05-25 04:05:54.238326 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-25 04:05:54.238343 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.2', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}}) 2025-05-25 04:05:54.238357 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 
'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-25 04:05:54.238377 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2025-05-25 04:05:54.238389 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-25 04:05:54.238441 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 
'dimensions': {}}}) 2025-05-25 04:05:54.238455 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2025-05-25 04:05:54.238466 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-25 04:05:54.238478 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-05-25 04:05:54.238490 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': 
['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-05-25 04:05:54.238501 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2025-05-25 04:05:54.238518 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-05-25 04:05:54.238637 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-25 04:05:54.238677 | 
orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-25 04:05:54.238690 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-25 04:05:54.238702 | orchestrator | 2025-05-25 04:05:54.239024 | orchestrator | TASK [prometheus : include_tasks] ********************************************** 2025-05-25 04:05:54.239046 | orchestrator | Sunday 25 May 2025 04:02:38 +0000 (0:00:03.195) 0:00:05.968 ************ 2025-05-25 04:05:54.239058 | orchestrator | included: /ansible/roles/prometheus/tasks/copy-certs.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-05-25 04:05:54.239069 | orchestrator | 2025-05-25 04:05:54.239081 | orchestrator | TASK [service-cert-copy : prometheus | Copying over extra CA certificates] ***** 2025-05-25 04:05:54.239092 | orchestrator | Sunday 25 May 2025 04:02:39 +0000 (0:00:01.300) 0:00:07.269 ************ 2025-05-25 04:05:54.239103 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 
'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-05-25 04:05:54.239116 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.2', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}}) 2025-05-25 04:05:54.239128 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-05-25 04:05:54.239157 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 
'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-05-25 04:05:54.239273 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-05-25 04:05:54.239292 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-05-25 04:05:54.239304 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-25 04:05:54.239316 | orchestrator | changed: [testbed-node-3] => (item={'key': 
'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-05-25 04:05:54.239327 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-05-25 04:05:54.239339 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-05-25 04:05:54.239351 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-25 04:05:54.239903 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-25 04:05:54.239961 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-05-25 04:05:54.240177 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-05-25 04:05:54.240191 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 
'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-25 04:05:54.240202 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-05-25 04:05:54.240214 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.2', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}}) 
2025-05-25 04:05:54.240246 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-25 04:05:54.240265 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-25 04:05:54.240361 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2025-05-25 04:05:54.240379 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2025-05-25 04:05:54.240391 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-05-25 04:05:54.240402 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2025-05-25 04:05:54.240413 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-25 04:05:54.240424 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 
'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-05-25 04:05:54.240443 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-05-25 04:05:54.240460 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-25 04:05:54.240504 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 
'dimensions': {}}}) 2025-05-25 04:05:54.240517 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-25 04:05:54.240529 | orchestrator | 2025-05-25 04:05:54.240541 | orchestrator | TASK [service-cert-copy : prometheus | Copying over backend internal TLS certificate] *** 2025-05-25 04:05:54.240552 | orchestrator | Sunday 25 May 2025 04:02:44 +0000 (0:00:05.432) 0:00:12.702 ************ 2025-05-25 04:05:54.240564 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-05-25 04:05:54.240576 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-25 04:05:54.240587 | orchestrator | skipping: [testbed-node-0] => 
(item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-25 04:05:54.240606 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-05-25 04:05:54.240622 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-25 04:05:54.240663 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.2', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2025-05-25 04:05:54.240676 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-05-25 04:05:54.240688 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-05-25 04:05:54.240700 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.2', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2025-05-25 04:05:54.240747 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-25 04:05:54.240767 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-05-25 04:05:54.240788 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': 
['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-25 04:05:54.240834 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-25 04:05:54.240848 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-05-25 04:05:54.240859 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-05-25 04:05:54.240871 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': 
{'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-25 04:05:54.240893 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-25 04:05:54.240905 | orchestrator | skipping: [testbed-node-0] 2025-05-25 04:05:54.240916 | orchestrator | skipping: [testbed-manager] 2025-05-25 04:05:54.240927 | orchestrator | skipping: [testbed-node-1] 2025-05-25 04:05:54.240939 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-25 04:05:54.240950 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': 
['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-05-25 04:05:54.240966 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-25 04:05:54.240981 | orchestrator | skipping: [testbed-node-2] 2025-05-25 04:05:54.241024 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-05-25 04:05:54.241039 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', 
'/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-05-25 04:05:54.241054 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-05-25 04:05:54.241066 | orchestrator | skipping: [testbed-node-3] 2025-05-25 04:05:54.241086 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-05-25 04:05:54.241101 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-05-25 04:05:54.241114 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 
'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-05-25 04:05:54.241127 | orchestrator | skipping: [testbed-node-4] 2025-05-25 04:05:54.241146 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-05-25 04:05:54.241160 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-05-25 04:05:54.241204 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-05-25 04:05:54.241219 | orchestrator | skipping: [testbed-node-5] 2025-05-25 04:05:54.241232 | orchestrator | 2025-05-25 04:05:54.241245 | orchestrator | TASK [service-cert-copy : prometheus | Copying over backend internal TLS key] *** 2025-05-25 04:05:54.241257 | orchestrator | Sunday 25 May 2025 04:02:46 +0000 (0:00:01.526) 0:00:14.228 ************ 2025-05-25 04:05:54.241270 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.2', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2025-05-25 04:05:54.241291 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-05-25 04:05:54.241305 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 
'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-05-25 04:05:54.241323 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.2', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2025-05-25 04:05:54.241335 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-25 04:05:54.241375 | 
orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-05-25 04:05:54.241388 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-25 04:05:54.241407 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-25 04:05:54.241419 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-05-25 04:05:54.241430 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-25 04:05:54.241441 | orchestrator | skipping: [testbed-manager] 2025-05-25 04:05:54.241453 | orchestrator | skipping: [testbed-node-0] 2025-05-25 04:05:54.241464 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-05-25 04:05:54.241484 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-25 04:05:54.241496 | orchestrator | skipping: [testbed-node-1] => 
(item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-25 04:05:54.241536 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-05-25 04:05:54.241549 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-25 04:05:54.241568 | orchestrator | skipping: [testbed-node-1] 2025-05-25 04:05:54.241579 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': 
['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-05-25 04:05:54.241591 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-25 04:05:54.241602 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-25 04:05:54.241613 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-05-25 04:05:54.241630 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': 
{'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-25 04:05:54.241642 | orchestrator | skipping: [testbed-node-2] 2025-05-25 04:05:54.241683 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-05-25 04:05:54.241697 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-05-25 04:05:54.241738 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': 
['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-05-25 04:05:54.241754 | orchestrator | skipping: [testbed-node-3] 2025-05-25 04:05:54 | INFO  | Task 74d32bef-f393-42b8-bfa3-4bb0a1100f52 is in state SUCCESS 2025-05-25 04:05:54.241765 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-05-25 04:05:54.241788 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-05-25 04:05:54.241799 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-05-25 04:05:54.241810 | orchestrator | skipping: [testbed-node-4] 2025-05-25 04:05:54.241828 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-05-25 04:05:54.241840 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-05-25 04:05:54.241883 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-05-25 04:05:54.241904 | orchestrator | skipping: [testbed-node-5] 2025-05-25 04:05:54.241915 | orchestrator | 2025-05-25 04:05:54.241926 | orchestrator | TASK [prometheus : 
Copying over config.json files] ***************************** 2025-05-25 04:05:54.241937 | orchestrator | Sunday 25 May 2025 04:02:48 +0000 (0:00:01.996) 0:00:16.225 ************ 2025-05-25 04:05:54.241949 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.2', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}}) 2025-05-25 04:05:54.241960 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-05-25 04:05:54.241972 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-05-25 04:05:54.241984 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-05-25 04:05:54.241995 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-05-25 04:05:54.242011 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-05-25 04:05:54.242102 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 
'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-05-25 04:05:54.242116 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-25 04:05:54.242128 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-05-25 04:05:54.242140 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-25 04:05:54.242151 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': 
{'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-05-25 04:05:54.242163 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-25 04:05:54.242174 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-05-25 04:05:54.242191 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-05-25 04:05:54.242239 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-25 04:05:54.242252 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-05-25 04:05:54.242264 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-25 04:05:54.242275 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 
'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.2', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}}) 2025-05-25 04:05:54.242288 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-25 04:05:54.242299 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2025-05-25 04:05:54.242316 | orchestrator | changed: [testbed-node-0] => 
(item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-05-25 04:05:54.242364 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2025-05-25 04:05:54.242378 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-05-25 04:05:54.242389 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': 
['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2025-05-25 04:05:54.242401 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-25 04:05:54.242412 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-05-25 04:05:54.242423 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-25 04:05:54.242440 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': 
{'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-25 04:05:54.242459 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-25 04:05:54.242470 | orchestrator | 2025-05-25 04:05:54.242481 | orchestrator | TASK [prometheus : Find custom prometheus alert rules files] ******************* 2025-05-25 04:05:54.242492 | orchestrator | Sunday 25 May 2025 04:02:54 +0000 (0:00:06.090) 0:00:22.315 ************ 2025-05-25 04:05:54.242532 | orchestrator | ok: [testbed-manager -> localhost] 2025-05-25 04:05:54.242545 | orchestrator | 2025-05-25 04:05:54.242556 | orchestrator | TASK [prometheus : Copying over custom prometheus alert rules files] *********** 2025-05-25 04:05:54.242567 | orchestrator | Sunday 25 May 2025 04:02:56 +0000 (0:00:01.481) 0:00:23.797 ************ 2025-05-25 04:05:54.242579 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/prometheus.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1079579, 'dev': 212, 'nlink': 1, 'atime': 1748131327.0, 'mtime': 1748131327.0, 
'ctime': 1748143066.925531, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-05-25 04:05:54.242591 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/prometheus.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1079579, 'dev': 212, 'nlink': 1, 'atime': 1748131327.0, 'mtime': 1748131327.0, 'ctime': 1748143066.925531, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-05-25 04:05:54.242603 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/prometheus.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1079579, 'dev': 212, 'nlink': 1, 'atime': 1748131327.0, 'mtime': 1748131327.0, 'ctime': 1748143066.925531, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-05-25 04:05:54.242614 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/node.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2309, 'inode': 1079548, 'dev': 212, 'nlink': 1, 'atime': 1748131327.0, 'mtime': 1748131327.0, 'ctime': 1748143066.9175308, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 
'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-05-25 04:05:54.242626 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/prometheus.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1079579, 'dev': 212, 'nlink': 1, 'atime': 1748131327.0, 'mtime': 1748131327.0, 'ctime': 1748143066.925531, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-05-25 04:05:54.242653 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/prometheus.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1079579, 'dev': 212, 'nlink': 1, 'atime': 1748131327.0, 'mtime': 1748131327.0, 'ctime': 1748143066.925531, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-05-25 04:05:54.242697 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/node.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2309, 'inode': 1079548, 'dev': 212, 'nlink': 1, 'atime': 1748131327.0, 'mtime': 1748131327.0, 'ctime': 1748143066.9175308, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-05-25 04:05:54.242710 | orchestrator | skipping: [testbed-node-4] => 
(item={'path': '/operations/prometheus/prometheus.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1079579, 'dev': 212, 'nlink': 1, 'atime': 1748131327.0, 'mtime': 1748131327.0, 'ctime': 1748143066.925531, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-05-25 04:05:54.242795 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/node.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2309, 'inode': 1079548, 'dev': 212, 'nlink': 1, 'atime': 1748131327.0, 'mtime': 1748131327.0, 'ctime': 1748143066.9175308, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-05-25 04:05:54.242818 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/alertmanager.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5051, 'inode': 1079518, 'dev': 212, 'nlink': 1, 'atime': 1748131327.0, 'mtime': 1748131327.0, 'ctime': 1748143066.9115307, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-05-25 04:05:54.242836 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/prometheus.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': 
False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1079579, 'dev': 212, 'nlink': 1, 'atime': 1748131327.0, 'mtime': 1748131327.0, 'ctime': 1748143066.925531, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})

(All loop items below are regular files with mode 0644, owner root:root (uid 0, gid 0), nlink 1, dev 212, atime/mtime 1748131327.0, ctime 1748143066.91-1748143066.93; only the manager host reports a change, all compute nodes skip.)

2025-05-25 04:05:54.242852 | orchestrator | skipping: [testbed-node-0] => /operations/prometheus/alertmanager.rules (5051 bytes)
2025-05-25 04:05:54.242879 | orchestrator | skipping: [testbed-node-3] => /operations/prometheus/node.rec.rules (2309 bytes)
2025-05-25 04:05:54.242924 | orchestrator | skipping: [testbed-node-4] => /operations/prometheus/node.rec.rules (2309 bytes)
2025-05-25 04:05:54.242937 | orchestrator | skipping: [testbed-node-1] => /operations/prometheus/alertmanager.rules (5051 bytes)
2025-05-25 04:05:54.242947 | orchestrator | skipping: [testbed-node-5] => /operations/prometheus/node.rec.rules (2309 bytes)
2025-05-25 04:05:54.242957 | orchestrator | skipping: [testbed-node-2] => /operations/prometheus/cadvisor.rules (3900 bytes)
2025-05-25 04:05:54.242966 | orchestrator | skipping: [testbed-node-0] => /operations/prometheus/cadvisor.rules (3900 bytes)
2025-05-25 04:05:54.242985 | orchestrator | changed: [testbed-manager] => /operations/prometheus/node.rec.rules (2309 bytes)
2025-05-25 04:05:54.243000 | orchestrator | skipping: [testbed-node-3] => /operations/prometheus/alertmanager.rules (5051 bytes)
2025-05-25 04:05:54.243036 | orchestrator | skipping: [testbed-node-1] => /operations/prometheus/cadvisor.rules (3900 bytes)
2025-05-25 04:05:54.243049 | orchestrator | skipping: [testbed-node-4] => /operations/prometheus/alertmanager.rules (5051 bytes)
2025-05-25 04:05:54.243059 | orchestrator | skipping: [testbed-node-2] => /operations/prometheus/hardware.rules (5593 bytes)
2025-05-25 04:05:54.243069 | orchestrator | skipping: [testbed-node-1] => /operations/prometheus/hardware.rules (5593 bytes)
2025-05-25 04:05:54.243079 | orchestrator | skipping: [testbed-node-0] => /operations/prometheus/hardware.rules (5593 bytes)
2025-05-25 04:05:54.243095 | orchestrator | skipping: [testbed-node-5] => /operations/prometheus/alertmanager.rules (5051 bytes)
2025-05-25 04:05:54.243110 | orchestrator | skipping: [testbed-node-1] => /operations/prometheus/ceph.rules (55956 bytes)
2025-05-25 04:05:54.243146 | orchestrator | skipping: [testbed-node-3] => /operations/prometheus/cadvisor.rules (3900 bytes)
2025-05-25 04:05:54.243158 | orchestrator | skipping: [testbed-node-2] => /operations/prometheus/ceph.rules (55956 bytes)
2025-05-25 04:05:54.243168 | orchestrator | skipping: [testbed-node-4] => /operations/prometheus/cadvisor.rules (3900 bytes)
2025-05-25 04:05:54.243178 | orchestrator | changed: [testbed-manager] => /operations/prometheus/alertmanager.rules (5051 bytes)
2025-05-25 04:05:54.243188 | orchestrator | skipping: [testbed-node-0] => /operations/prometheus/ceph.rules (55956 bytes)
2025-05-25 04:05:54.243204 | orchestrator | skipping: [testbed-node-5] => /operations/prometheus/cadvisor.rules (3900 bytes)
2025-05-25 04:05:54.243219 | orchestrator | skipping: [testbed-node-1] => /operations/prometheus/haproxy.rules (7933 bytes)
2025-05-25 04:05:54.243254 | orchestrator | skipping: [testbed-node-3] => /operations/prometheus/hardware.rules (5593 bytes)
2025-05-25 04:05:54.243266 | orchestrator | skipping: [testbed-node-4] => /operations/prometheus/hardware.rules (5593 bytes)
2025-05-25 04:05:54.243276 | orchestrator | skipping: [testbed-node-2] => /operations/prometheus/haproxy.rules (7933 bytes)
2025-05-25 04:05:54.243285 | orchestrator | skipping: [testbed-node-5] => /operations/prometheus/hardware.rules (5593 bytes)
2025-05-25 04:05:54.243295 | orchestrator | skipping: [testbed-node-0] => /operations/prometheus/haproxy.rules (7933 bytes)
2025-05-25 04:05:54.243311 | orchestrator | skipping: [testbed-node-1] => /operations/prometheus/node.rules (13522 bytes)
2025-05-25 04:05:54.243326 | orchestrator | skipping: [testbed-node-2] => /operations/prometheus/node.rules (13522 bytes)
2025-05-25 04:05:54.243336 | orchestrator | skipping: [testbed-node-3] => /operations/prometheus/ceph.rules (55956 bytes)
2025-05-25 04:05:54.243372 | orchestrator | skipping: [testbed-node-5] => /operations/prometheus/ceph.rules (55956 bytes)
2025-05-25 04:05:54.243384 | orchestrator | skipping: [testbed-node-3] => /operations/prometheus/haproxy.rules (7933 bytes)
2025-05-25 04:05:54.243394 | orchestrator | skipping: [testbed-node-2] => /operations/prometheus/prometheus-extra.rules (7408 bytes)
2025-05-25 04:05:54.243410 | orchestrator | skipping: [testbed-node-4] => /operations/prometheus/ceph.rules (55956 bytes)
2025-05-25 04:05:54.243420 | orchestrator | changed: [testbed-manager] => /operations/prometheus/cadvisor.rules (3900 bytes)
2025-05-25 04:05:54.243435 | orchestrator | skipping: [testbed-node-0] => /operations/prometheus/node.rules (13522 bytes)
2025-05-25 04:05:54.243445 | orchestrator | skipping: [testbed-node-1] => /operations/prometheus/prometheus-extra.rules (7408 bytes)
2025-05-25 04:05:54.243481 | orchestrator | skipping: [testbed-node-3] => /operations/prometheus/node.rules (13522 bytes)
2025-05-25 04:05:54.243493 | orchestrator | skipping: [testbed-node-5] => /operations/prometheus/haproxy.rules (7933 bytes)
2025-05-25 04:05:54.243503 | orchestrator | skipping: [testbed-node-1] => /operations/prometheus/redfish.rules (334 bytes)
2025-05-25 04:05:54.243519 | orchestrator | skipping: [testbed-node-4] => /operations/prometheus/haproxy.rules (7933 bytes)
2025-05-25 04:05:54.243529 | orchestrator | skipping: [testbed-node-2] => /operations/prometheus/redfish.rules (334 bytes)
2025-05-25 04:05:54.243543 | orchestrator | skipping: [testbed-node-1] => /operations/prometheus/openstack.rules (12293 bytes)
2025-05-25 04:05:54.243553 | orchestrator | skipping: [testbed-node-0] => /operations/prometheus/prometheus-extra.rules (7408 bytes)
2025-05-25 04:05:54.243589 | orchestrator | skipping: [testbed-node-3] => /operations/prometheus/prometheus-extra.rules (7408 bytes)
2025-05-25 04:05:54.243600 | orchestrator | skipping: [testbed-node-5] => /operations/prometheus/node.rules (13522 bytes)
2025-05-25 04:05:54.243610 | orchestrator | skipping: [testbed-node-2] => /operations/prometheus/openstack.rules (12293 bytes)
2025-05-25 04:05:54.243632 | orchestrator | skipping: [testbed-node-0] => /operations/prometheus/redfish.rules (334 bytes)
2025-05-25 04:05:54.243642 | orchestrator | skipping: [testbed-node-4] => /operations/prometheus/node.rules (13522 bytes)
2025-05-25 04:05:54.243657 | orchestrator | changed: [testbed-manager] => /operations/prometheus/hardware.rules (5593 bytes)
2025-05-25 04:05:54.243667 | orchestrator | skipping: [testbed-node-1] => /operations/prometheus/ceph.rec.rules (3 bytes)
2025-05-25 04:05:54.243703 | orchestrator | skipping: [testbed-node-2] => /operations/prometheus/ceph.rec.rules (3 bytes)
2025-05-25 04:05:54.243774 | orchestrator | skipping: [testbed-node-5] => /operations/prometheus/prometheus-extra.rules (7408 bytes)
2025-05-25 04:05:54.243796 | orchestrator | skipping: [testbed-node-3] => /operations/prometheus/redfish.rules (334 bytes)
2025-05-25 04:05:54.243807 | orchestrator | skipping: [testbed-node-0] => /operations/prometheus/openstack.rules (12293 bytes)
2025-05-25 04:05:54.243817 | orchestrator | skipping: [testbed-node-3] => /operations/prometheus/openstack.rules (12293 bytes)
2025-05-25 04:05:54.243832 | orchestrator | skipping: [testbed-node-4] => /operations/prometheus/prometheus-extra.rules (7408 bytes)
2025-05-25 04:05:54.243843 | orchestrator | skipping: [testbed-node-5] => /operations/prometheus/redfish.rules (334 bytes)
2025-05-25 04:05:54.243884 | orchestrator | skipping: [testbed-node-1] => /operations/prometheus/fluentd-aggregator.rules (996 bytes)
2025-05-25 04:05:54.243896 | orchestrator | skipping: [testbed-node-2] => /operations/prometheus/fluentd-aggregator.rules (996 bytes)
2025-05-25 04:05:54.243912 | orchestrator | skipping: [testbed-node-0] => /operations/prometheus/ceph.rec.rules (3 bytes)
2025-05-25 04:05:54.243922 | orchestrator | skipping: [testbed-node-3] => /operations/prometheus/ceph.rec.rules (3 bytes)
2025-05-25 04:05:54.243932 | orchestrator | skipping: [testbed-node-5] => /operations/prometheus/openstack.rules (12293 bytes)
2025-05-25 04:05:54.243942 | orchestrator | skipping: [testbed-node-2] => /operations/prometheus/alertmanager.rec.rules (3 bytes)
2025-05-25 04:05:54.243957 | orchestrator | skipping: [testbed-node-4] => /operations/prometheus/redfish.rules (334 bytes)
2025-05-25 04:05:54.243991 | orchestrator | skipping: [testbed-node-1] => /operations/prometheus/alertmanager.rec.rules (3 bytes)
2025-05-25 04:05:54.244003 | orchestrator | changed: [testbed-manager] => /operations/prometheus/ceph.rules (55956 bytes)
1748131327.0, 'ctime': 1748143066.9135308, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-05-25 04:05:54.244019 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1079536, 'dev': 212, 'nlink': 1, 'atime': 1748131327.0, 'mtime': 1748131327.0, 'ctime': 1748143066.9155307, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-05-25 04:05:54.244029 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/ceph.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1079522, 'dev': 212, 'nlink': 1, 'atime': 1748131327.0, 'mtime': 1748131327.0, 'ctime': 1748143066.9125307, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-05-25 04:05:54.244040 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1079536, 'dev': 212, 'nlink': 1, 'atime': 1748131327.0, 'mtime': 1748131327.0, 'ctime': 1748143066.9155307, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': 
True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-05-25 04:05:54.244049 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3792, 'inode': 1079543, 'dev': 212, 'nlink': 1, 'atime': 1748131327.0, 'mtime': 1748131327.0, 'ctime': 1748143066.9165308, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-05-25 04:05:54.244064 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3792, 'inode': 1079543, 'dev': 212, 'nlink': 1, 'atime': 1748131327.0, 'mtime': 1748131327.0, 'ctime': 1748143066.9165308, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-05-25 04:05:54.244080 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/openstack.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12293, 'inode': 1079552, 'dev': 212, 'nlink': 1, 'atime': 1748131327.0, 'mtime': 1748131327.0, 'ctime': 1748143066.9175308, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-05-25 04:05:54.244091 | orchestrator | skipping: 
[testbed-node-0] => (item={'path': '/operations/prometheus/alertmanager.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1079515, 'dev': 212, 'nlink': 1, 'atime': 1748131327.0, 'mtime': 1748131327.0, 'ctime': 1748143066.9115307, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-05-25 04:05:54.244107 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/alertmanager.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1079515, 'dev': 212, 'nlink': 1, 'atime': 1748131327.0, 'mtime': 1748131327.0, 'ctime': 1748143066.9115307, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-05-25 04:05:54.244117 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1079536, 'dev': 212, 'nlink': 1, 'atime': 1748131327.0, 'mtime': 1748131327.0, 'ctime': 1748143066.9155307, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-05-25 04:05:54.244127 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 
'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1079587, 'dev': 212, 'nlink': 1, 'atime': 1748131327.0, 'mtime': 1748131327.0, 'ctime': 1748143066.928531, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-05-25 04:05:54.244137 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/ceph.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1079522, 'dev': 212, 'nlink': 1, 'atime': 1748131327.0, 'mtime': 1748131327.0, 'ctime': 1748143066.9125307, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-05-25 04:05:54.244152 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1079587, 'dev': 212, 'nlink': 1, 'atime': 1748131327.0, 'mtime': 1748131327.0, 'ctime': 1748143066.928531, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-05-25 04:05:54.244171 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/haproxy.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7933, 'inode': 1079537, 'dev': 212, 'nlink': 1, 'atime': 1748131327.0, 
'mtime': 1748131327.0, 'ctime': 1748143066.9155307, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-05-25 04:05:54.244187 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3792, 'inode': 1079543, 'dev': 212, 'nlink': 1, 'atime': 1748131327.0, 'mtime': 1748131327.0, 'ctime': 1748143066.9165308, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-05-25 04:05:54.244197 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3792, 'inode': 1079543, 'dev': 212, 'nlink': 1, 'atime': 1748131327.0, 'mtime': 1748131327.0, 'ctime': 1748143066.9165308, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-05-25 04:05:54.244207 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/alertmanager.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1079515, 'dev': 212, 'nlink': 1, 'atime': 1748131327.0, 'mtime': 1748131327.0, 'ctime': 1748143066.9115307, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 
'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-05-25 04:05:54.244217 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/elasticsearch.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5987, 'inode': 1079533, 'dev': 212, 'nlink': 1, 'atime': 1748131327.0, 'mtime': 1748131327.0, 'ctime': 1748143066.9145308, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-05-25 04:05:54.244227 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1079536, 'dev': 212, 'nlink': 1, 'atime': 1748131327.0, 'mtime': 1748131327.0, 'ctime': 1748143066.9155307, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-05-25 04:05:54.244241 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1079587, 'dev': 212, 'nlink': 1, 'atime': 1748131327.0, 'mtime': 1748131327.0, 'ctime': 1748143066.928531, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-05-25 04:05:54.244258 | orchestrator | skipping: 
[testbed-node-2] => (item={'path': '/operations/prometheus/elasticsearch.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5987, 'inode': 1079533, 'dev': 212, 'nlink': 1, 'atime': 1748131327.0, 'mtime': 1748131327.0, 'ctime': 1748143066.9145308, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-05-25 04:05:54.244274 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12980, 'inode': 1079580, 'dev': 212, 'nlink': 1, 'atime': 1748131327.0, 'mtime': 1748131327.0, 'ctime': 1748143066.925531, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-05-25 04:05:54.244284 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/node.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 13522, 'inode': 1079550, 'dev': 212, 'nlink': 1, 'atime': 1748131327.0, 'mtime': 1748131327.0, 'ctime': 1748143066.9175308, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-05-25 04:05:54.244294 | orchestrator | skipping: [testbed-node-1] 2025-05-25 04:05:54.244303 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': 
'0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1079587, 'dev': 212, 'nlink': 1, 'atime': 1748131327.0, 'mtime': 1748131327.0, 'ctime': 1748143066.928531, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-05-25 04:05:54.244312 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/elasticsearch.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5987, 'inode': 1079533, 'dev': 212, 'nlink': 1, 'atime': 1748131327.0, 'mtime': 1748131327.0, 'ctime': 1748143066.9145308, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-05-25 04:05:54.244320 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/alertmanager.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1079515, 'dev': 212, 'nlink': 1, 'atime': 1748131327.0, 'mtime': 1748131327.0, 'ctime': 1748143066.9115307, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-05-25 04:05:54.244332 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3792, 
'inode': 1079543, 'dev': 212, 'nlink': 1, 'atime': 1748131327.0, 'mtime': 1748131327.0, 'ctime': 1748143066.9165308, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-05-25 04:05:54.244345 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12980, 'inode': 1079580, 'dev': 212, 'nlink': 1, 'atime': 1748131327.0, 'mtime': 1748131327.0, 'ctime': 1748143066.925531, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-05-25 04:05:54.244358 | orchestrator | skipping: [testbed-node-2] 2025-05-25 04:05:54.244367 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/elasticsearch.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5987, 'inode': 1079533, 'dev': 212, 'nlink': 1, 'atime': 1748131327.0, 'mtime': 1748131327.0, 'ctime': 1748143066.9145308, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-05-25 04:05:54.244375 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3792, 'inode': 1079543, 'dev': 212, 'nlink': 1, 'atime': 1748131327.0, 'mtime': 
1748131327.0, 'ctime': 1748143066.9165308, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-05-25 04:05:54.244383 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12980, 'inode': 1079580, 'dev': 212, 'nlink': 1, 'atime': 1748131327.0, 'mtime': 1748131327.0, 'ctime': 1748143066.925531, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-05-25 04:05:54.244391 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1079587, 'dev': 212, 'nlink': 1, 'atime': 1748131327.0, 'mtime': 1748131327.0, 'ctime': 1748143066.928531, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-05-25 04:05:54.244399 | orchestrator | skipping: [testbed-node-3] 2025-05-25 04:05:54.244408 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12980, 'inode': 1079580, 'dev': 212, 'nlink': 1, 'atime': 1748131327.0, 'mtime': 1748131327.0, 'ctime': 1748143066.925531, 'gr_name': 'root', 'pw_name': 'root', 
'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-05-25 04:05:54.244422 | orchestrator | skipping: [testbed-node-0] 2025-05-25 04:05:54.244430 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/prometheus-extra.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7408, 'inode': 1079577, 'dev': 212, 'nlink': 1, 'atime': 1748131327.0, 'mtime': 1748131327.0, 'ctime': 1748143066.9245307, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-05-25 04:05:54.244449 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1079587, 'dev': 212, 'nlink': 1, 'atime': 1748131327.0, 'mtime': 1748131327.0, 'ctime': 1748143066.928531, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-05-25 04:05:54.244458 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/elasticsearch.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5987, 'inode': 1079533, 'dev': 212, 'nlink': 1, 'atime': 1748131327.0, 'mtime': 1748131327.0, 'ctime': 1748143066.9145308, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 
'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-05-25 04:05:54.244466 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/elasticsearch.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5987, 'inode': 1079533, 'dev': 212, 'nlink': 1, 'atime': 1748131327.0, 'mtime': 1748131327.0, 'ctime': 1748143066.9145308, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-05-25 04:05:54.244474 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12980, 'inode': 1079580, 'dev': 212, 'nlink': 1, 'atime': 1748131327.0, 'mtime': 1748131327.0, 'ctime': 1748143066.925531, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-05-25 04:05:54.244482 | orchestrator | skipping: [testbed-node-5] 2025-05-25 04:05:54.244490 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12980, 'inode': 1079580, 'dev': 212, 'nlink': 1, 'atime': 1748131327.0, 'mtime': 1748131327.0, 'ctime': 1748143066.925531, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 
'isgid': False})  2025-05-25 04:05:54.244498 | orchestrator | skipping: [testbed-node-4] 2025-05-25 04:05:54.244507 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/redfish.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 334, 'inode': 1079588, 'dev': 212, 'nlink': 1, 'atime': 1748131327.0, 'mtime': 1748131327.0, 'ctime': 1748143066.9295309, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-05-25 04:05:54.244520 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/openstack.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12293, 'inode': 1079552, 'dev': 212, 'nlink': 1, 'atime': 1748131327.0, 'mtime': 1748131327.0, 'ctime': 1748143066.9175308, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-05-25 04:05:54.244533 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/ceph.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1079522, 'dev': 212, 'nlink': 1, 'atime': 1748131327.0, 'mtime': 1748131327.0, 'ctime': 1748143066.9125307, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-05-25 04:05:54.244541 | orchestrator | changed: [testbed-manager] => 
(item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1079536, 'dev': 212, 'nlink': 1, 'atime': 1748131327.0, 'mtime': 1748131327.0, 'ctime': 1748143066.9155307, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-05-25 04:05:54.244550 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/alertmanager.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1079515, 'dev': 212, 'nlink': 1, 'atime': 1748131327.0, 'mtime': 1748131327.0, 'ctime': 1748143066.9115307, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-05-25 04:05:54.244590 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3792, 'inode': 1079543, 'dev': 212, 'nlink': 1, 'atime': 1748131327.0, 'mtime': 1748131327.0, 'ctime': 1748143066.9165308, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-05-25 04:05:54.244599 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1079587, 'dev': 212, 'nlink': 1, 'atime': 1748131327.0, 'mtime': 1748131327.0, 'ctime': 1748143066.928531, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-05-25 04:05:54.244611 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/elasticsearch.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5987, 'inode': 1079533, 'dev': 212, 'nlink': 1, 'atime': 1748131327.0, 'mtime': 1748131327.0, 'ctime': 1748143066.9145308, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-05-25 04:05:54.244625 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12980, 'inode': 1079580, 'dev': 212, 'nlink': 1, 'atime': 1748131327.0, 'mtime': 1748131327.0, 'ctime': 1748143066.925531, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-05-25 04:05:54.244633 | orchestrator |
2025-05-25 04:05:54.244641 | orchestrator | TASK [prometheus : Find prometheus common config overrides] ********************
2025-05-25 04:05:54.244654 | orchestrator | Sunday 25 May 2025 04:03:17 +0000 (0:00:21.572) 0:00:45.369 ************
2025-05-25 04:05:54.244662 | orchestrator | ok: [testbed-manager -> localhost]
2025-05-25 04:05:54.244670 | orchestrator |
2025-05-25 04:05:54.244678 | orchestrator | TASK [prometheus : Find prometheus host config overrides] **********************
2025-05-25 04:05:54.244686 | orchestrator | Sunday 25 May 2025 04:03:18 +0000 (0:00:00.759) 0:00:46.129 ************
2025-05-25 04:05:54.244694 | orchestrator | [WARNING]: Skipped
2025-05-25 04:05:54.244702 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-
2025-05-25 04:05:54.244710 | orchestrator | manager/prometheus.yml.d' path due to this access issue:
2025-05-25 04:05:54.244743 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-
2025-05-25 04:05:54.244755 | orchestrator | manager/prometheus.yml.d' is not a directory
2025-05-25 04:05:54.244764 | orchestrator | ok: [testbed-manager -> localhost]
2025-05-25 04:05:54.244772 | orchestrator | [WARNING]: Skipped
2025-05-25 04:05:54.244780 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-
2025-05-25 04:05:54.244788 | orchestrator | node-0/prometheus.yml.d' path due to this access issue:
2025-05-25 04:05:54.244795 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-
2025-05-25 04:05:54.244803 | orchestrator | node-0/prometheus.yml.d' is not a directory
2025-05-25 04:05:54.244811 | orchestrator | ok: [testbed-node-0 -> localhost]
2025-05-25 04:05:54.244819 | orchestrator | [WARNING]: Skipped
2025-05-25 04:05:54.244826 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-
2025-05-25 04:05:54.244835 | orchestrator | node-1/prometheus.yml.d' path due to this access issue:
2025-05-25 04:05:54.244843 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-
2025-05-25 04:05:54.244850 | orchestrator | node-1/prometheus.yml.d' is not a directory
2025-05-25 04:05:54.244858 | orchestrator | [WARNING]: Skipped
2025-05-25 04:05:54.244866 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-
2025-05-25 04:05:54.244874 | orchestrator | node-2/prometheus.yml.d' path due to this access issue:
2025-05-25 04:05:54.244882 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-
2025-05-25 04:05:54.244890 | orchestrator | node-2/prometheus.yml.d' is not a directory
2025-05-25 04:05:54.244898 | orchestrator | [WARNING]: Skipped
2025-05-25 04:05:54.244905 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-
2025-05-25 04:05:54.244913 | orchestrator | node-3/prometheus.yml.d' path due to this access issue:
2025-05-25 04:05:54.244921 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-
2025-05-25 04:05:54.244929 | orchestrator | node-3/prometheus.yml.d' is not a directory
2025-05-25 04:05:54.244937 | orchestrator | [WARNING]: Skipped
2025-05-25 04:05:54.244944 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-
2025-05-25 04:05:54.244952 | orchestrator | node-5/prometheus.yml.d' path due to this access issue:
2025-05-25 04:05:54.244966 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-
2025-05-25 04:05:54.244974 | orchestrator | node-5/prometheus.yml.d' is not a directory
2025-05-25 04:05:54.244982 | orchestrator | [WARNING]: Skipped
2025-05-25 04:05:54.244989 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-
2025-05-25 04:05:54.244997 | orchestrator | node-4/prometheus.yml.d' path due to this access issue:
2025-05-25 04:05:54.245005 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-
2025-05-25 04:05:54.245012 | orchestrator | node-4/prometheus.yml.d' is not a directory
2025-05-25 04:05:54.245020 | orchestrator | ok: [testbed-node-1 -> localhost]
2025-05-25 04:05:54.245028 | orchestrator | ok: [testbed-node-2 -> localhost]
2025-05-25 04:05:54.245036 | orchestrator | ok: [testbed-node-3 -> localhost]
2025-05-25 04:05:54.245044 | orchestrator | ok: [testbed-node-5 -> localhost]
2025-05-25 04:05:54.245051 | orchestrator | ok: [testbed-node-4 -> localhost]
2025-05-25 04:05:54.245059 | orchestrator |
2025-05-25 04:05:54.245067 | orchestrator | TASK [prometheus : Copying over prometheus config file] ************************
2025-05-25 04:05:54.245075 | orchestrator | Sunday 25 May 2025 04:03:21 +0000 (0:00:02.649) 0:00:48.778 ************
2025-05-25 04:05:54.245082 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)
2025-05-25 04:05:54.245091 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)
2025-05-25 04:05:54.245099 | orchestrator | skipping: [testbed-node-0]
2025-05-25 04:05:54.245106 | orchestrator | skipping: [testbed-node-1]
2025-05-25 04:05:54.245118 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)
2025-05-25 04:05:54.245126 | orchestrator | skipping: [testbed-node-2]
2025-05-25 04:05:54.245134 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)
2025-05-25 04:05:54.245141 | orchestrator | skipping: [testbed-node-3]
2025-05-25 04:05:54.245149 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)
2025-05-25 04:05:54.245157 | orchestrator | skipping: [testbed-node-4]
2025-05-25 04:05:54.245164 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)
2025-05-25 04:05:54.245172 | orchestrator | skipping: [testbed-node-5]
2025-05-25 04:05:54.245180 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)
2025-05-25 04:05:54.245188 | orchestrator |
2025-05-25 04:05:54.245195 | orchestrator | TASK [prometheus : Copying over prometheus web config file] ********************
2025-05-25 04:05:54.245208 | orchestrator | Sunday 25 May 2025 04:03:38 +0000 (0:00:17.675) 0:01:06.454 ************
2025-05-25 04:05:54.245217 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)
2025-05-25 04:05:54.245225 | orchestrator | skipping: [testbed-node-1]
2025-05-25 04:05:54.245233 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)
2025-05-25 04:05:54.245240 | orchestrator | skipping: [testbed-node-0]
2025-05-25 04:05:54.245248 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)
2025-05-25 04:05:54.245256 | orchestrator | skipping: [testbed-node-2]
2025-05-25 04:05:54.245263 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)
2025-05-25 04:05:54.245284 | orchestrator | skipping: [testbed-node-3]
2025-05-25 04:05:54.245292 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)
2025-05-25 04:05:54.245308 | orchestrator | skipping: [testbed-node-4]
2025-05-25 04:05:54.245316 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)
2025-05-25 04:05:54.245323 | orchestrator | skipping: [testbed-node-5]
2025-05-25 04:05:54.245337 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)
2025-05-25 04:05:54.245345 | orchestrator |
2025-05-25 04:05:54.245352 | orchestrator | TASK [prometheus : Copying over prometheus alertmanager config file] ***********
2025-05-25 04:05:54.245360 | orchestrator | Sunday 25 May 2025 04:03:42 +0000 (0:00:03.687) 0:01:10.141 ************
2025-05-25 04:05:54.245368 | orchestrator | skipping: [testbed-node-1] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)
2025-05-25 04:05:54.245376 | orchestrator | skipping: [testbed-node-2] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)
2025-05-25 04:05:54.245384 | orchestrator | skipping: [testbed-node-1]
2025-05-25 04:05:54.245392 | orchestrator | skipping: [testbed-node-0] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)
2025-05-25 04:05:54.245400 | orchestrator | skipping: [testbed-node-2]
2025-05-25 04:05:54.245408 | orchestrator | skipping: [testbed-node-0]
2025-05-25 04:05:54.245416 | orchestrator | changed: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)
2025-05-25 04:05:54.245423 | orchestrator | skipping: [testbed-node-3] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)
2025-05-25 04:05:54.245431 | orchestrator | skipping: [testbed-node-3]
2025-05-25 04:05:54.245439 | orchestrator | skipping: [testbed-node-4] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)
2025-05-25 04:05:54.245447 | orchestrator | skipping: [testbed-node-4]
2025-05-25 04:05:54.245455 | orchestrator | skipping: [testbed-node-5] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)
2025-05-25 04:05:54.245463 | orchestrator | skipping: [testbed-node-5]
2025-05-25 04:05:54.245470 | orchestrator |
2025-05-25 04:05:54.245478 | orchestrator | TASK [prometheus : Find custom Alertmanager alert notification templates] ******
2025-05-25 04:05:54.245486 | orchestrator | Sunday 25 May 2025 04:03:44 +0000 (0:00:02.252) 0:01:12.393 ************
2025-05-25 04:05:54.245493 | orchestrator | ok: [testbed-manager -> localhost]
2025-05-25 04:05:54.245501 | orchestrator |
2025-05-25 04:05:54.245509 | orchestrator | TASK [prometheus : Copying over custom Alertmanager alert notification templates] ***
2025-05-25 04:05:54.245517 | orchestrator | Sunday 25 May 2025 04:03:45 +0000 (0:00:00.639) 0:01:13.033 ************
2025-05-25 04:05:54.245524 | orchestrator | skipping: [testbed-manager]
2025-05-25 04:05:54.245532 | orchestrator | skipping: [testbed-node-0]
2025-05-25 04:05:54.245540 | orchestrator | skipping: [testbed-node-1]
2025-05-25 04:05:54.245547 | orchestrator | skipping: [testbed-node-2]
2025-05-25 04:05:54.245555 | orchestrator | skipping: [testbed-node-3]
2025-05-25 04:05:54.245563 | orchestrator | skipping: [testbed-node-4]
2025-05-25 04:05:54.245570 | orchestrator | skipping: [testbed-node-5]
2025-05-25 04:05:54.245578 | orchestrator |
2025-05-25 04:05:54.245586 | orchestrator | TASK [prometheus : Copying over my.cnf for mysqld_exporter] ********************
2025-05-25 04:05:54.245593 | orchestrator | Sunday 25 May 2025 04:03:45 +0000 (0:00:00.583) 0:01:13.617 ************
2025-05-25 04:05:54.245601 | orchestrator | skipping: [testbed-manager]
2025-05-25 04:05:54.245609 | orchestrator | skipping: [testbed-node-3]
2025-05-25 04:05:54.245617 | orchestrator | skipping: [testbed-node-4]
2025-05-25 04:05:54.245624 | orchestrator | skipping: [testbed-node-5]
2025-05-25 04:05:54.245632 | orchestrator | changed: [testbed-node-0]
2025-05-25 04:05:54.245644 | orchestrator | changed: [testbed-node-1]
2025-05-25 04:05:54.245652 | orchestrator | changed: [testbed-node-2]
2025-05-25 04:05:54.245660 | orchestrator |
2025-05-25 04:05:54.245668 | orchestrator | TASK [prometheus : Copying cloud config file for openstack exporter] ***********
2025-05-25 04:05:54.245675 | orchestrator | Sunday 25 May 2025 04:03:48 +0000 (0:00:03.068) 0:01:16.685 ************
2025-05-25 04:05:54.245683 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)
2025-05-25 04:05:54.245696 | orchestrator | skipping: [testbed-manager] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)
2025-05-25 04:05:54.245704 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)
2025-05-25 04:05:54.245712 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)
2025-05-25 04:05:54.245743 | orchestrator | skipping: [testbed-node-0]
2025-05-25 04:05:54.245752 | orchestrator | skipping: [testbed-manager]
2025-05-25 04:05:54.245764 | orchestrator | skipping: [testbed-node-1]
2025-05-25 04:05:54.245772 | orchestrator | skipping: [testbed-node-2]
2025-05-25 04:05:54.245780 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)
2025-05-25 04:05:54.245788 | orchestrator | skipping: [testbed-node-3]
2025-05-25 04:05:54.245795 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)
2025-05-25 04:05:54.245803 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)
2025-05-25 04:05:54.245811 | orchestrator | skipping: [testbed-node-4]
2025-05-25 04:05:54.245819 | orchestrator | skipping: [testbed-node-5]
2025-05-25 04:05:54.245826 | orchestrator |
2025-05-25 04:05:54.245834 | orchestrator | TASK [prometheus : Copying config file for blackbox exporter] ******************
2025-05-25 04:05:54.245842 | orchestrator | Sunday 25 May 2025 04:03:50 +0000 (0:00:01.676) 0:01:18.362 ************
2025-05-25 04:05:54.245850 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)
2025-05-25 04:05:54.245858 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)
2025-05-25 04:05:54.245866 | orchestrator | skipping: [testbed-node-0]
2025-05-25 04:05:54.245874 | orchestrator | skipping: [testbed-node-1]
2025-05-25 04:05:54.245881 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)
2025-05-25 04:05:54.245889 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)
2025-05-25 04:05:54.245897 | orchestrator | skipping: [testbed-node-2]
2025-05-25 04:05:54.245905 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)
2025-05-25 04:05:54.245912 | orchestrator | skipping: [testbed-node-3]
2025-05-25 04:05:54.245920 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)
2025-05-25 04:05:54.245928 | orchestrator | skipping: [testbed-node-4]
2025-05-25 04:05:54.245936 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)
2025-05-25 04:05:54.245943 | orchestrator | skipping: [testbed-node-5]
2025-05-25 04:05:54.245951 | orchestrator |
2025-05-25 04:05:54.245959 | orchestrator | TASK [prometheus : Find extra prometheus server config files] ******************
2025-05-25 04:05:54.245967 | orchestrator | Sunday 25 May 2025 04:03:52 +0000 (0:00:02.187) 0:01:20.550 ************
2025-05-25 04:05:54.245975 | orchestrator | [WARNING]: Skipped
2025-05-25 04:05:54.245982 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/extras/' path
2025-05-25 04:05:54.245990 | orchestrator | due to this access issue:
2025-05-25 04:05:54.245998 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/extras/' is
2025-05-25 04:05:54.246005 | orchestrator | not a directory
2025-05-25 04:05:54.246013 | orchestrator | ok: [testbed-manager -> localhost]
2025-05-25 04:05:54.246064 | orchestrator |
2025-05-25 04:05:54.246072 | orchestrator | TASK [prometheus : Create subdirectories for extra config files] ***************
2025-05-25 04:05:54.246080 | orchestrator | Sunday 25 May 2025 04:03:53 +0000 (0:00:01.180) 0:01:21.731 ************
2025-05-25 04:05:54.246087 | orchestrator | skipping: [testbed-manager]
2025-05-25 04:05:54.246095 | orchestrator | skipping: [testbed-node-0]
2025-05-25 04:05:54.246109 | orchestrator | skipping: [testbed-node-1]
2025-05-25 04:05:54.246117 | orchestrator | skipping: [testbed-node-2]
2025-05-25 04:05:54.246125 | orchestrator | skipping: [testbed-node-3]
2025-05-25 04:05:54.246132 | orchestrator | skipping: [testbed-node-4]
2025-05-25 04:05:54.246140 | orchestrator | skipping: [testbed-node-5]
2025-05-25 04:05:54.246148 | orchestrator |
2025-05-25 04:05:54.246156 | orchestrator | TASK [prometheus : Template extra prometheus server config files] **************
2025-05-25 04:05:54.246164 | orchestrator | Sunday 25 May 2025 04:03:55 +0000 (0:00:01.613) 0:01:23.344 ************
2025-05-25 04:05:54.246171 | orchestrator | skipping: [testbed-manager]
2025-05-25 04:05:54.246179 | orchestrator | skipping: [testbed-node-0]
2025-05-25 04:05:54.246187 | orchestrator | skipping: [testbed-node-1]
2025-05-25 04:05:54.246194 | orchestrator | skipping: [testbed-node-2]
2025-05-25 04:05:54.246202 | orchestrator | skipping: [testbed-node-3]
2025-05-25 04:05:54.246209 | orchestrator | skipping: [testbed-node-4]
2025-05-25 04:05:54.246217 | orchestrator | skipping: [testbed-node-5]
2025-05-25 04:05:54.246225 | orchestrator |
2025-05-25 04:05:54.246232 | orchestrator | TASK [prometheus : Check prometheus containers] ********************************
2025-05-25 04:05:54.246240 | orchestrator | Sunday 25 May 2025 04:03:56 +0000 (0:00:01.016) 0:01:24.364 ************
2025-05-25 04:05:54.246253 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.2', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})
2025-05-25 04:05:54.246269 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2025-05-25 04:05:54.246278 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2025-05-25 04:05:54.246287 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2025-05-25 04:05:54.246295 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2025-05-25 04:05:54.246308 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-05-25 04:05:54.246318 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-05-25 04:05:54.246331 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2025-05-25 04:05:54.246340 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2025-05-25 04:05:54.246354 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.2', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})
2025-05-25 04:05:54.246364 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-05-25 04:05:54.246372 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-05-25 04:05:54.246389 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-05-25 04:05:54.246397 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2025-05-25 04:05:54.246405 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-05-25 04:05:54.246417 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2025-05-25 04:05:54.246432 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2025-05-25 04:05:54.246441 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2025-05-25 04:05:54.246449 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-05-25 04:05:54.246462 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})
2025-05-25 04:05:54.246471 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2025-05-25 04:05:54.246480 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2025-05-25 04:05:54.246492 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2025-05-25 04:05:54.246501 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-05-25 04:05:54.246513 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2025-05-25 04:05:54.246522 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-05-25 04:05:54.246531 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})
2025-05-25 04:05:54.246545 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})
2025-05-25 04:05:54.246553 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-05-25 04:05:54.246561 | orchestrator |
2025-05-25 04:05:54.246570 | orchestrator | TASK [prometheus : Creating prometheus database user and setting permissions] ***
2025-05-25 04:05:54.246578 | orchestrator | Sunday 25 May 2025 04:04:01 +0000 (0:00:04.937) 0:01:29.302 ************
2025-05-25 04:05:54.246586 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-0)
2025-05-25 04:05:54.246593 | orchestrator | skipping: [testbed-manager]
2025-05-25 04:05:54.246601 | orchestrator |
2025-05-25 04:05:54.246610 | orchestrator | TASK [prometheus : Flush handlers] *********************************************
2025-05-25 04:05:54.246618 | orchestrator | Sunday 25 May 2025 04:04:03 +0000 (0:00:01.906) 0:01:31.208 ************
2025-05-25 04:05:54.246625 | orchestrator |
2025-05-25 04:05:54.246633 | orchestrator | TASK [prometheus : Flush handlers] *********************************************
2025-05-25 04:05:54.246641 | orchestrator | Sunday 25 May 2025 04:04:03 +0000 (0:00:00.135) 0:01:31.344 ************
2025-05-25 04:05:54.246649 | orchestrator |
2025-05-25 04:05:54.246656 | orchestrator | TASK [prometheus : Flush handlers] *********************************************
2025-05-25 04:05:54.246664 | orchestrator | Sunday 25 May 2025 04:04:03 +0000 (0:00:00.134) 0:01:31.478 ************
2025-05-25 04:05:54.246672 | orchestrator |
2025-05-25 04:05:54.246679 | orchestrator | TASK [prometheus : Flush handlers] *********************************************
2025-05-25 04:05:54.246692 | orchestrator | Sunday 25 May 2025 04:04:04 +0000 (0:00:00.306) 0:01:31.785 ************
2025-05-25 04:05:54.246700 | orchestrator |
2025-05-25 04:05:54.246707 | orchestrator | TASK [prometheus : Flush handlers] *********************************************
2025-05-25 04:05:54.246768 | orchestrator | Sunday 25 May 2025 04:04:04 +0000 (0:00:00.150) 0:01:31.935 ************
2025-05-25 04:05:54.246785 | orchestrator |
2025-05-25 04:05:54.246799 | orchestrator | TASK [prometheus : Flush handlers] *********************************************
2025-05-25 04:05:54.246812 | orchestrator | Sunday 25 May 2025 04:04:04 +0000 (0:00:00.113) 0:01:32.049 ************
2025-05-25 04:05:54.246821 | orchestrator |
2025-05-25 04:05:54.246828 | orchestrator | TASK [prometheus : Flush handlers] *********************************************
2025-05-25 04:05:54.246836 | orchestrator | Sunday 25 May 2025 04:04:04 +0000 (0:00:00.119) 0:01:32.168 ************
2025-05-25 04:05:54.246844 | orchestrator |
2025-05-25 04:05:54.246852 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-server container] *************
2025-05-25 04:05:54.246859 | orchestrator | Sunday 25 May 2025 04:04:04 +0000 (0:00:00.192) 0:01:32.361 ************
2025-05-25 04:05:54.246867 | orchestrator | changed: [testbed-manager]
2025-05-25 04:05:54.246875 | orchestrator |
2025-05-25 04:05:54.246890 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-node-exporter container] ******
2025-05-25 04:05:54.246898 | orchestrator | Sunday 25 May 2025 04:04:21 +0000 (0:00:17.324) 0:01:49.685 ************
2025-05-25 04:05:54.246912 | orchestrator | changed: [testbed-node-2]
2025-05-25 04:05:54.246920 | orchestrator | changed: [testbed-manager]
2025-05-25 04:05:54.246928 | orchestrator | changed: [testbed-node-4]
2025-05-25 04:05:54.246936 | orchestrator | changed: [testbed-node-5]
2025-05-25 04:05:54.246943 | orchestrator | changed: [testbed-node-3]
2025-05-25 04:05:54.246951 | orchestrator
| changed: [testbed-node-0] 2025-05-25 04:05:54.246958 | orchestrator | changed: [testbed-node-1] 2025-05-25 04:05:54.246966 | orchestrator | 2025-05-25 04:05:54.246974 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-mysqld-exporter container] **** 2025-05-25 04:05:54.246982 | orchestrator | Sunday 25 May 2025 04:04:38 +0000 (0:00:16.116) 0:02:05.802 ************ 2025-05-25 04:05:54.246989 | orchestrator | changed: [testbed-node-1] 2025-05-25 04:05:54.246997 | orchestrator | changed: [testbed-node-2] 2025-05-25 04:05:54.247005 | orchestrator | changed: [testbed-node-0] 2025-05-25 04:05:54.247013 | orchestrator | 2025-05-25 04:05:54.247021 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-memcached-exporter container] *** 2025-05-25 04:05:54.247028 | orchestrator | Sunday 25 May 2025 04:04:43 +0000 (0:00:05.763) 0:02:11.566 ************ 2025-05-25 04:05:54.247036 | orchestrator | changed: [testbed-node-2] 2025-05-25 04:05:54.247043 | orchestrator | changed: [testbed-node-1] 2025-05-25 04:05:54.247051 | orchestrator | changed: [testbed-node-0] 2025-05-25 04:05:54.247059 | orchestrator | 2025-05-25 04:05:54.247067 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-cadvisor container] *********** 2025-05-25 04:05:54.247075 | orchestrator | Sunday 25 May 2025 04:04:54 +0000 (0:00:10.472) 0:02:22.038 ************ 2025-05-25 04:05:54.247082 | orchestrator | changed: [testbed-manager] 2025-05-25 04:05:54.247090 | orchestrator | changed: [testbed-node-2] 2025-05-25 04:05:54.247098 | orchestrator | changed: [testbed-node-4] 2025-05-25 04:05:54.247105 | orchestrator | changed: [testbed-node-5] 2025-05-25 04:05:54.247113 | orchestrator | changed: [testbed-node-3] 2025-05-25 04:05:54.247121 | orchestrator | changed: [testbed-node-1] 2025-05-25 04:05:54.247128 | orchestrator | changed: [testbed-node-0] 2025-05-25 04:05:54.247136 | orchestrator | 2025-05-25 04:05:54.247144 | orchestrator | RUNNING HANDLER [prometheus : Restart 
prometheus-alertmanager container] ******* 2025-05-25 04:05:54.247152 | orchestrator | Sunday 25 May 2025 04:05:10 +0000 (0:00:16.535) 0:02:38.573 ************ 2025-05-25 04:05:54.247159 | orchestrator | changed: [testbed-manager] 2025-05-25 04:05:54.247167 | orchestrator | 2025-05-25 04:05:54.247175 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-elasticsearch-exporter container] *** 2025-05-25 04:05:54.247183 | orchestrator | Sunday 25 May 2025 04:05:24 +0000 (0:00:13.858) 0:02:52.432 ************ 2025-05-25 04:05:54.247191 | orchestrator | changed: [testbed-node-2] 2025-05-25 04:05:54.247199 | orchestrator | changed: [testbed-node-0] 2025-05-25 04:05:54.247206 | orchestrator | changed: [testbed-node-1] 2025-05-25 04:05:54.247214 | orchestrator | 2025-05-25 04:05:54.247222 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-blackbox-exporter container] *** 2025-05-25 04:05:54.247229 | orchestrator | Sunday 25 May 2025 04:05:36 +0000 (0:00:11.585) 0:03:04.018 ************ 2025-05-25 04:05:54.247237 | orchestrator | changed: [testbed-manager] 2025-05-25 04:05:54.247245 | orchestrator | 2025-05-25 04:05:54.247253 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-libvirt-exporter container] *** 2025-05-25 04:05:54.247260 | orchestrator | Sunday 25 May 2025 04:05:41 +0000 (0:00:05.308) 0:03:09.327 ************ 2025-05-25 04:05:54.247268 | orchestrator | changed: [testbed-node-3] 2025-05-25 04:05:54.247276 | orchestrator | changed: [testbed-node-5] 2025-05-25 04:05:54.247284 | orchestrator | changed: [testbed-node-4] 2025-05-25 04:05:54.247291 | orchestrator | 2025-05-25 04:05:54.247299 | orchestrator | PLAY RECAP ********************************************************************* 2025-05-25 04:05:54.247307 | orchestrator | testbed-manager : ok=23  changed=14  unreachable=0 failed=0 skipped=8  rescued=0 ignored=0 2025-05-25 04:05:54.247315 | orchestrator | testbed-node-0 : ok=15  changed=10  unreachable=0 failed=0 
skipped=11  rescued=0 ignored=0 2025-05-25 04:05:54.247327 | orchestrator | testbed-node-1 : ok=15  changed=10  unreachable=0 failed=0 skipped=11  rescued=0 ignored=0 2025-05-25 04:05:54.247334 | orchestrator | testbed-node-2 : ok=15  changed=10  unreachable=0 failed=0 skipped=11  rescued=0 ignored=0 2025-05-25 04:05:54.247341 | orchestrator | testbed-node-3 : ok=12  changed=7  unreachable=0 failed=0 skipped=12  rescued=0 ignored=0 2025-05-25 04:05:54.247352 | orchestrator | testbed-node-4 : ok=12  changed=7  unreachable=0 failed=0 skipped=12  rescued=0 ignored=0 2025-05-25 04:05:54.247359 | orchestrator | testbed-node-5 : ok=12  changed=7  unreachable=0 failed=0 skipped=12  rescued=0 ignored=0 2025-05-25 04:05:54.247370 | orchestrator | 2025-05-25 04:05:54.247380 | orchestrator | 2025-05-25 04:05:54.247391 | orchestrator | TASKS RECAP ******************************************************************** 2025-05-25 04:05:54.247401 | orchestrator | Sunday 25 May 2025 04:05:53 +0000 (0:00:11.718) 0:03:21.045 ************ 2025-05-25 04:05:54.247412 | orchestrator | =============================================================================== 2025-05-25 04:05:54.247422 | orchestrator | prometheus : Copying over custom prometheus alert rules files ---------- 21.57s 2025-05-25 04:05:54.247431 | orchestrator | prometheus : Copying over prometheus config file ----------------------- 17.68s 2025-05-25 04:05:54.247438 | orchestrator | prometheus : Restart prometheus-server container ----------------------- 17.32s 2025-05-25 04:05:54.247449 | orchestrator | prometheus : Restart prometheus-cadvisor container --------------------- 16.54s 2025-05-25 04:05:54.247456 | orchestrator | prometheus : Restart prometheus-node-exporter container ---------------- 16.12s 2025-05-25 04:05:54.247463 | orchestrator | prometheus : Restart prometheus-alertmanager container ----------------- 13.86s 2025-05-25 04:05:54.247470 | orchestrator | prometheus : Restart prometheus-libvirt-exporter 
container ------------- 11.72s 2025-05-25 04:05:54.247476 | orchestrator | prometheus : Restart prometheus-elasticsearch-exporter container ------- 11.59s 2025-05-25 04:05:54.247483 | orchestrator | prometheus : Restart prometheus-memcached-exporter container ----------- 10.47s 2025-05-25 04:05:54.247489 | orchestrator | prometheus : Copying over config.json files ----------------------------- 6.09s 2025-05-25 04:05:54.247496 | orchestrator | prometheus : Restart prometheus-mysqld-exporter container --------------- 5.76s 2025-05-25 04:05:54.247502 | orchestrator | service-cert-copy : prometheus | Copying over extra CA certificates ----- 5.43s 2025-05-25 04:05:54.247509 | orchestrator | prometheus : Restart prometheus-blackbox-exporter container ------------- 5.31s 2025-05-25 04:05:54.247516 | orchestrator | prometheus : Check prometheus containers -------------------------------- 4.94s 2025-05-25 04:05:54.247522 | orchestrator | prometheus : Copying over prometheus web config file -------------------- 3.69s 2025-05-25 04:05:54.247529 | orchestrator | prometheus : Ensuring config directories exist -------------------------- 3.20s 2025-05-25 04:05:54.247535 | orchestrator | prometheus : Copying over my.cnf for mysqld_exporter -------------------- 3.07s 2025-05-25 04:05:54.247542 | orchestrator | prometheus : Find prometheus host config overrides ---------------------- 2.65s 2025-05-25 04:05:54.247548 | orchestrator | prometheus : Copying over prometheus alertmanager config file ----------- 2.25s 2025-05-25 04:05:54.247555 | orchestrator | prometheus : Copying config file for blackbox exporter ------------------ 2.19s 2025-05-25 04:05:54.247562 | orchestrator | 2025-05-25 04:05:54 | INFO  | Task 6e378feb-528a-412a-95cd-f98cd2c708a8 is in state STARTED 2025-05-25 04:05:54.247569 | orchestrator | 2025-05-25 04:05:54 | INFO  | Wait 1 second(s) until the next check 2025-05-25 04:05:57.279812 | orchestrator | 2025-05-25 04:05:57 | INFO  | Task 
e53f6211-1d7d-4bb7-9417-1c740766f41d is in state STARTED 2025-05-25 04:05:57.280664 | orchestrator | 2025-05-25 04:05:57 | INFO  | Task d6c24b89-ac59-4b65-9cc6-94f5016cf236 is in state STARTED 2025-05-25 04:05:57.281388 | orchestrator | 2025-05-25 04:05:57 | INFO  | Task 9259ea8e-37b5-449f-a04a-bd0975550f8c is in state STARTED 2025-05-25 04:05:57.282890 | orchestrator | 2025-05-25 04:05:57 | INFO  | Task 6e378feb-528a-412a-95cd-f98cd2c708a8 is in state STARTED 2025-05-25 04:05:57.282916 | orchestrator | 2025-05-25 04:05:57 | INFO  | Wait 1 second(s) until the next check 2025-05-25 04:06:18.649651 | orchestrator | 2025-05-25 04:06:18 | INFO  | Task e53f6211-1d7d-4bb7-9417-1c740766f41d is in state STARTED 2025-05-25 04:06:18.653811 | orchestrator | 2025-05-25 04:06:18 | INFO  | Task d6c24b89-ac59-4b65-9cc6-94f5016cf236 is in state STARTED 2025-05-25 04:06:18.655391 | orchestrator | 2025-05-25 04:06:18 | INFO  | Task ceabf00f-7e05-42cb-9903-297b32cf08f7 is in state STARTED 2025-05-25 04:06:18.657068 | orchestrator | 2025-05-25 04:06:18 | INFO  | Task 9259ea8e-37b5-449f-a04a-bd0975550f8c is in state STARTED 2025-05-25 04:06:18.658684 | orchestrator | 2025-05-25 04:06:18 | INFO  | Task 6e378feb-528a-412a-95cd-f98cd2c708a8 is in state STARTED 2025-05-25 04:06:18.659301 | orchestrator | 2025-05-25 04:06:18 | INFO  | Wait 1 second(s) until the next check 2025-05-25 04:06:33.858507 | orchestrator | 2025-05-25 04:06:33 | INFO  | Task e53f6211-1d7d-4bb7-9417-1c740766f41d is in state STARTED 2025-05-25 04:06:33.858626 | orchestrator | 2025-05-25 04:06:33 | INFO  | Task d6c24b89-ac59-4b65-9cc6-94f5016cf236 is in state STARTED 2025-05-25 04:06:33.859023 | orchestrator | 2025-05-25 04:06:33 | INFO  | Task ceabf00f-7e05-42cb-9903-297b32cf08f7 is in state SUCCESS 2025-05-25 04:06:33.859664 | orchestrator | 2025-05-25 04:06:33 | INFO  | Task 9259ea8e-37b5-449f-a04a-bd0975550f8c is in state STARTED 2025-05-25 04:06:33.860524 | orchestrator | 2025-05-25 04:06:33 | INFO  | Task 6e378feb-528a-412a-95cd-f98cd2c708a8 is in state STARTED 2025-05-25 04:06:33.860808 | orchestrator | 2025-05-25 04:06:33 | INFO  | Wait 1 second(s) until the next check
2025-05-25 04:06:49.047208 | orchestrator | 2025-05-25 04:06:49 | INFO  | Task e53f6211-1d7d-4bb7-9417-1c740766f41d is in state STARTED 2025-05-25 04:06:49.047559 | orchestrator | 2025-05-25 04:06:49 | INFO  | Task d6c24b89-ac59-4b65-9cc6-94f5016cf236 is in state STARTED 2025-05-25 04:06:49.048438 | orchestrator | 2025-05-25 04:06:49 | INFO  | Task 9259ea8e-37b5-449f-a04a-bd0975550f8c is in state STARTED 2025-05-25 04:06:49.049417 | orchestrator | 2025-05-25 04:06:49 | INFO  | Task 6e378feb-528a-412a-95cd-f98cd2c708a8 is in state STARTED 2025-05-25 04:06:49.049465 | orchestrator | 2025-05-25 04:06:49 | INFO  | Wait 1 second(s) until the next check 2025-05-25 04:06:52.082447 | orchestrator | 2025-05-25 04:06:52 | INFO  | Task e53f6211-1d7d-4bb7-9417-1c740766f41d is in state STARTED 2025-05-25 04:06:52.082566 | orchestrator | 2025-05-25 04:06:52 | INFO  | Task d6c24b89-ac59-4b65-9cc6-94f5016cf236 is in state SUCCESS 2025-05-25 04:06:52.084378 | orchestrator | 2025-05-25 04:06:52.084671 | orchestrator | None 2025-05-25 04:06:52.084770 | orchestrator | 2025-05-25 04:06:52.084786 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-05-25 04:06:52.084798 | orchestrator | 2025-05-25 04:06:52.084810 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-05-25 04:06:52.084822 | orchestrator | Sunday 25 May 2025 04:02:53 +0000 (0:00:00.261) 0:00:00.261 ************ 2025-05-25 04:06:52.084833 | orchestrator | ok: [testbed-node-0] 2025-05-25 04:06:52.084846 | orchestrator | ok: [testbed-node-1] 2025-05-25 04:06:52.084857 | orchestrator | ok: [testbed-node-2] 2025-05-25 04:06:52.084868 | orchestrator | ok: [testbed-node-3] 2025-05-25 04:06:52.084879 | orchestrator | ok: [testbed-node-4] 2025-05-25 04:06:52.084890 | orchestrator | ok: [testbed-node-5] 2025-05-25 04:06:52.084901 | orchestrator | 2025-05-25 04:06:52.084913 | orchestrator | TASK [Group hosts based on enabled services] 
*********************************** 2025-05-25 04:06:52.084924 | orchestrator | Sunday 25 May 2025 04:02:54 +0000 (0:00:00.718) 0:00:00.979 ************ 2025-05-25 04:06:52.084935 | orchestrator | ok: [testbed-node-0] => (item=enable_cinder_True) 2025-05-25 04:06:52.084946 | orchestrator | ok: [testbed-node-1] => (item=enable_cinder_True) 2025-05-25 04:06:52.084957 | orchestrator | ok: [testbed-node-2] => (item=enable_cinder_True) 2025-05-25 04:06:52.084968 | orchestrator | ok: [testbed-node-3] => (item=enable_cinder_True) 2025-05-25 04:06:52.084979 | orchestrator | ok: [testbed-node-4] => (item=enable_cinder_True) 2025-05-25 04:06:52.084990 | orchestrator | ok: [testbed-node-5] => (item=enable_cinder_True) 2025-05-25 04:06:52.085001 | orchestrator | 2025-05-25 04:06:52.085012 | orchestrator | PLAY [Apply role cinder] ******************************************************* 2025-05-25 04:06:52.085023 | orchestrator | 2025-05-25 04:06:52.085034 | orchestrator | TASK [cinder : include_tasks] ************************************************** 2025-05-25 04:06:52.085045 | orchestrator | Sunday 25 May 2025 04:02:54 +0000 (0:00:00.572) 0:00:01.552 ************ 2025-05-25 04:06:52.085057 | orchestrator | included: /ansible/roles/cinder/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-05-25 04:06:52.085069 | orchestrator | 2025-05-25 04:06:52.085080 | orchestrator | TASK [service-ks-register : cinder | Creating services] ************************ 2025-05-25 04:06:52.085091 | orchestrator | Sunday 25 May 2025 04:02:55 +0000 (0:00:01.215) 0:00:02.767 ************ 2025-05-25 04:06:52.085125 | orchestrator | changed: [testbed-node-0] => (item=cinderv3 (volumev3)) 2025-05-25 04:06:52.085137 | orchestrator | 2025-05-25 04:06:52.085148 | orchestrator | TASK [service-ks-register : cinder | Creating endpoints] *********************** 2025-05-25 04:06:52.085159 | orchestrator | Sunday 25 May 2025 04:02:59 +0000 
(0:00:03.275) 0:00:06.043 ************ 2025-05-25 04:06:52.085170 | orchestrator | changed: [testbed-node-0] => (item=cinderv3 -> https://api-int.testbed.osism.xyz:8776/v3/%(tenant_id)s -> internal) 2025-05-25 04:06:52.085181 | orchestrator | changed: [testbed-node-0] => (item=cinderv3 -> https://api.testbed.osism.xyz:8776/v3/%(tenant_id)s -> public) 2025-05-25 04:06:52.085192 | orchestrator | 2025-05-25 04:06:52.085203 | orchestrator | TASK [service-ks-register : cinder | Creating projects] ************************ 2025-05-25 04:06:52.085214 | orchestrator | Sunday 25 May 2025 04:03:05 +0000 (0:00:06.566) 0:00:12.609 ************ 2025-05-25 04:06:52.085225 | orchestrator | ok: [testbed-node-0] => (item=service) 2025-05-25 04:06:52.085236 | orchestrator | 2025-05-25 04:06:52.085247 | orchestrator | TASK [service-ks-register : cinder | Creating users] *************************** 2025-05-25 04:06:52.085259 | orchestrator | Sunday 25 May 2025 04:03:08 +0000 (0:00:02.941) 0:00:15.551 ************ 2025-05-25 04:06:52.085269 | orchestrator | [WARNING]: Module did not set no_log for update_password 2025-05-25 04:06:52.085280 | orchestrator | changed: [testbed-node-0] => (item=cinder -> service) 2025-05-25 04:06:52.085292 | orchestrator | 2025-05-25 04:06:52.085303 | orchestrator | TASK [service-ks-register : cinder | Creating roles] *************************** 2025-05-25 04:06:52.085315 | orchestrator | Sunday 25 May 2025 04:03:12 +0000 (0:00:03.847) 0:00:19.398 ************ 2025-05-25 04:06:52.085326 | orchestrator | ok: [testbed-node-0] => (item=admin) 2025-05-25 04:06:52.085337 | orchestrator | 2025-05-25 04:06:52.085363 | orchestrator | TASK [service-ks-register : cinder | Granting user roles] ********************** 2025-05-25 04:06:52.085376 | orchestrator | Sunday 25 May 2025 04:03:15 +0000 (0:00:03.126) 0:00:22.525 ************ 2025-05-25 04:06:52.085387 | orchestrator | changed: [testbed-node-0] => (item=cinder -> service -> admin) 2025-05-25 04:06:52.085398 | 
orchestrator | changed: [testbed-node-0] => (item=cinder -> service -> service) 2025-05-25 04:06:52.085409 | orchestrator | 2025-05-25 04:06:52.085420 | orchestrator | TASK [cinder : Ensuring config directories exist] ****************************** 2025-05-25 04:06:52.085432 | orchestrator | Sunday 25 May 2025 04:03:22 +0000 (0:00:07.346) 0:00:29.871 ************ 2025-05-25 04:06:52.085461 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-05-25 04:06:52.085478 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 
'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-05-25 04:06:52.085498 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-05-25 04:06:52.085510 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-05-25 04:06:52.085528 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 
'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-05-25 04:06:52.085540 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-05-25 04:06:52.085561 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-05-25 04:06:52.085574 | orchestrator | changed: [testbed-node-4] => (item={'key': 
'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-05-25 04:06:52.085592 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-05-25 04:06:52.085604 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': 
['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-05-25 04:06:52.085622 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-05-25 04:06:52.085635 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-05-25 04:06:52.085647 | orchestrator | 2025-05-25 
04:06:52.085662 | orchestrator | TASK [cinder : include_tasks] ************************************************** 2025-05-25 04:06:52.085672 | orchestrator | Sunday 25 May 2025 04:03:25 +0000 (0:00:02.426) 0:00:32.297 ************ 2025-05-25 04:06:52.085682 | orchestrator | skipping: [testbed-node-0] 2025-05-25 04:06:52.085716 | orchestrator | skipping: [testbed-node-1] 2025-05-25 04:06:52.085737 | orchestrator | skipping: [testbed-node-2] 2025-05-25 04:06:52.085747 | orchestrator | skipping: [testbed-node-3] 2025-05-25 04:06:52.085757 | orchestrator | skipping: [testbed-node-4] 2025-05-25 04:06:52.085767 | orchestrator | skipping: [testbed-node-5] 2025-05-25 04:06:52.085777 | orchestrator | 2025-05-25 04:06:52.085787 | orchestrator | TASK [cinder : include_tasks] ************************************************** 2025-05-25 04:06:52.085796 | orchestrator | Sunday 25 May 2025 04:03:26 +0000 (0:00:00.857) 0:00:33.154 ************ 2025-05-25 04:06:52.085806 | orchestrator | skipping: [testbed-node-0] 2025-05-25 04:06:52.085815 | orchestrator | skipping: [testbed-node-1] 2025-05-25 04:06:52.085825 | orchestrator | skipping: [testbed-node-2] 2025-05-25 04:06:52.085835 | orchestrator | included: /ansible/roles/cinder/tasks/external_ceph.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-05-25 04:06:52.085845 | orchestrator | 2025-05-25 04:06:52.085854 | orchestrator | TASK [cinder : Ensuring cinder service ceph config subdirs exists] ************* 2025-05-25 04:06:52.085864 | orchestrator | Sunday 25 May 2025 04:03:27 +0000 (0:00:00.837) 0:00:33.992 ************ 2025-05-25 04:06:52.085873 | orchestrator | changed: [testbed-node-4] => (item=cinder-volume) 2025-05-25 04:06:52.085883 | orchestrator | changed: [testbed-node-3] => (item=cinder-volume) 2025-05-25 04:06:52.085893 | orchestrator | changed: [testbed-node-5] => (item=cinder-volume) 2025-05-25 04:06:52.085903 | orchestrator | changed: [testbed-node-3] => (item=cinder-backup) 2025-05-25 04:06:52.085912 | 
orchestrator | changed: [testbed-node-4] => (item=cinder-backup) 2025-05-25 04:06:52.085922 | orchestrator | changed: [testbed-node-5] => (item=cinder-backup) 2025-05-25 04:06:52.085931 | orchestrator | 2025-05-25 04:06:52.085941 | orchestrator | TASK [cinder : Copying over multiple ceph.conf for cinder services] ************ 2025-05-25 04:06:52.085951 | orchestrator | Sunday 25 May 2025 04:03:29 +0000 (0:00:02.068) 0:00:36.061 ************ 2025-05-25 04:06:52.085963 | orchestrator | skipping: [testbed-node-4] => (item=[{'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.14:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])  2025-05-25 04:06:52.085979 | orchestrator | skipping: [testbed-node-4] => (item=[{'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
cinder-scheduler 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])  2025-05-25 04:06:52.085990 | orchestrator | skipping: [testbed-node-3] => (item=[{'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.13:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])  2025-05-25 04:06:52.086061 | orchestrator | skipping: [testbed-node-3] => (item=[{'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])  2025-05-25 04:06:52.086085 | orchestrator | skipping: [testbed-node-5] => (item=[{'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 
'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.15:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])  2025-05-25 04:06:52.086280 | orchestrator | skipping: [testbed-node-5] => (item=[{'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])  2025-05-25 04:06:52.086308 | orchestrator | changed: [testbed-node-4] => (item=[{'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 
'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}]) 2025-05-25 04:06:52.086828 | orchestrator | changed: [testbed-node-5] => (item=[{'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}]) 2025-05-25 04:06:52.086913 | orchestrator | changed: [testbed-node-3] => (item=[{'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', 
'/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}]) 2025-05-25 04:06:52.086927 | orchestrator | changed: [testbed-node-5] => (item=[{'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}]) 2025-05-25 04:06:52.086938 | orchestrator | changed: [testbed-node-4] => (item=[{'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}]) 2025-05-25 04:06:52.086956 | orchestrator | changed: [testbed-node-3] => 
(item=[{'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}]) 2025-05-25 04:06:52.086966 | orchestrator | 2025-05-25 04:06:52.086977 | orchestrator | TASK [cinder : Copy over Ceph keyring files for cinder-volume] ***************** 2025-05-25 04:06:52.086987 | orchestrator | Sunday 25 May 2025 04:03:32 +0000 (0:00:03.674) 0:00:39.735 ************ 2025-05-25 04:06:52.086997 | orchestrator | changed: [testbed-node-3] => (item={'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}) 2025-05-25 04:06:52.087014 | orchestrator | changed: [testbed-node-4] => (item={'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}) 2025-05-25 04:06:52.087024 | orchestrator | changed: [testbed-node-5] => (item={'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}) 2025-05-25 04:06:52.087034 | orchestrator | 2025-05-25 04:06:52.087043 | orchestrator | TASK [cinder : Copy over Ceph keyring files for cinder-backup] ***************** 2025-05-25 04:06:52.087053 | orchestrator | Sunday 25 May 2025 04:03:34 +0000 (0:00:02.096) 0:00:41.832 ************ 2025-05-25 04:06:52.087062 | orchestrator | changed: [testbed-node-4] => (item=ceph.client.cinder.keyring) 2025-05-25 04:06:52.087072 | orchestrator | changed: [testbed-node-3] => (item=ceph.client.cinder.keyring) 2025-05-25 04:06:52.087081 | orchestrator | changed: [testbed-node-5] => 
(item=ceph.client.cinder.keyring) 2025-05-25 04:06:52.087091 | orchestrator | changed: [testbed-node-4] => (item=ceph.client.cinder-backup.keyring) 2025-05-25 04:06:52.087100 | orchestrator | changed: [testbed-node-3] => (item=ceph.client.cinder-backup.keyring) 2025-05-25 04:06:52.087177 | orchestrator | changed: [testbed-node-5] => (item=ceph.client.cinder-backup.keyring) 2025-05-25 04:06:52.087192 | orchestrator | 2025-05-25 04:06:52.087202 | orchestrator | TASK [cinder : Ensuring config directory has correct owner and permission] ***** 2025-05-25 04:06:52.087212 | orchestrator | Sunday 25 May 2025 04:03:37 +0000 (0:00:02.848) 0:00:44.680 ************ 2025-05-25 04:06:52.087221 | orchestrator | ok: [testbed-node-3] => (item=cinder-volume) 2025-05-25 04:06:52.087231 | orchestrator | ok: [testbed-node-4] => (item=cinder-volume) 2025-05-25 04:06:52.087241 | orchestrator | ok: [testbed-node-5] => (item=cinder-volume) 2025-05-25 04:06:52.087251 | orchestrator | ok: [testbed-node-3] => (item=cinder-backup) 2025-05-25 04:06:52.087260 | orchestrator | ok: [testbed-node-4] => (item=cinder-backup) 2025-05-25 04:06:52.087270 | orchestrator | ok: [testbed-node-5] => (item=cinder-backup) 2025-05-25 04:06:52.087279 | orchestrator | 2025-05-25 04:06:52.087289 | orchestrator | TASK [cinder : Check if policies shall be overwritten] ************************* 2025-05-25 04:06:52.087298 | orchestrator | Sunday 25 May 2025 04:03:38 +0000 (0:00:00.943) 0:00:45.624 ************ 2025-05-25 04:06:52.087308 | orchestrator | skipping: [testbed-node-0] 2025-05-25 04:06:52.087317 | orchestrator | 2025-05-25 04:06:52.087327 | orchestrator | TASK [cinder : Set cinder policy file] ***************************************** 2025-05-25 04:06:52.087336 | orchestrator | Sunday 25 May 2025 04:03:38 +0000 (0:00:00.147) 0:00:45.771 ************ 2025-05-25 04:06:52.087346 | orchestrator | skipping: [testbed-node-0] 2025-05-25 04:06:52.087355 | orchestrator | skipping: [testbed-node-1] 2025-05-25 
04:06:52.087365 | orchestrator | skipping: [testbed-node-2] 2025-05-25 04:06:52.087374 | orchestrator | skipping: [testbed-node-3] 2025-05-25 04:06:52.087384 | orchestrator | skipping: [testbed-node-4] 2025-05-25 04:06:52.087393 | orchestrator | skipping: [testbed-node-5] 2025-05-25 04:06:52.087404 | orchestrator | 2025-05-25 04:06:52.087417 | orchestrator | TASK [cinder : include_tasks] ************************************************** 2025-05-25 04:06:52.087428 | orchestrator | Sunday 25 May 2025 04:03:40 +0000 (0:00:01.379) 0:00:47.151 ************ 2025-05-25 04:06:52.087441 | orchestrator | included: /ansible/roles/cinder/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-05-25 04:06:52.087453 | orchestrator | 2025-05-25 04:06:52.087466 | orchestrator | TASK [service-cert-copy : cinder | Copying over extra CA certificates] ********* 2025-05-25 04:06:52.087477 | orchestrator | Sunday 25 May 2025 04:03:41 +0000 (0:00:01.730) 0:00:48.882 ************ 2025-05-25 04:06:52.087489 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-05-25 04:06:52.087520 | orchestrator 
| changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-05-25 04:06:52.087559 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-05-25 04:06:52.087573 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 
'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-05-25 04:06:52.087585 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-05-25 04:06:52.087597 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', 
'/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-05-25 04:06:52.087619 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-05-25 04:06:52.087632 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-05-25 04:06:52.087668 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 
'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-05-25 04:06:52.087681 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-05-25 04:06:52.087720 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-05-25 04:06:52.087740 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cinder-backup', 
'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})
2025-05-25 04:06:52.087750 | orchestrator |
2025-05-25 04:06:52.087759 | orchestrator | TASK [service-cert-copy : cinder | Copying over backend internal TLS certificate] ***
2025-05-25 04:06:52.087769 | orchestrator | Sunday 25 May 2025 04:03:45 +0000 (0:00:03.726) 0:00:52.608 ************
2025-05-25 04:06:52.087785 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})
2025-05-25 04:06:52.087800 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})
2025-05-25 04:06:52.087810 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})
2025-05-25 04:06:52.087820 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})
2025-05-25 04:06:52.087836 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})
2025-05-25 04:06:52.087851 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})
2025-05-25 04:06:52.087861 | orchestrator | skipping: [testbed-node-0]
2025-05-25 04:06:52.087871 | orchestrator | skipping: [testbed-node-1]
2025-05-25 04:06:52.087881 | orchestrator | skipping: [testbed-node-2]
2025-05-25 04:06:52.087891 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})
2025-05-25 04:06:52.087907 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})
2025-05-25 04:06:52.087918 | orchestrator | skipping: [testbed-node-3]
2025-05-25 04:06:52.087928 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})
2025-05-25 04:06:52.087944 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})
2025-05-25 04:06:52.087954 | orchestrator | skipping: [testbed-node-5]
2025-05-25 04:06:52.087969 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})
2025-05-25 04:06:52.087979 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})
2025-05-25 04:06:52.087989 | orchestrator | skipping: [testbed-node-4]
2025-05-25 04:06:52.087999 | orchestrator |
2025-05-25 04:06:52.088008 | orchestrator | TASK [service-cert-copy : cinder | Copying over backend internal TLS key] ******
2025-05-25 04:06:52.088018 | orchestrator | Sunday 25 May 2025 04:03:47 +0000 (0:00:02.251) 0:00:54.859 ************
2025-05-25 04:06:52.088034 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})
2025-05-25 04:06:52.088044 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})
2025-05-25 04:06:52.088059 | orchestrator | skipping: [testbed-node-0]
2025-05-25 04:06:52.088070 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})
2025-05-25 04:06:52.088084 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})
2025-05-25 04:06:52.088095 | orchestrator | skipping: [testbed-node-1]
2025-05-25 04:06:52.088105 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})
2025-05-25 04:06:52.088123 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})
2025-05-25 04:06:52.088133 | orchestrator | skipping: [testbed-node-2]
2025-05-25 04:06:52.088143 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})
2025-05-25 04:06:52.088159 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})
2025-05-25 04:06:52.088169 | orchestrator | skipping: [testbed-node-3]
2025-05-25 04:06:52.088179 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})
2025-05-25 04:06:52.088193 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})
2025-05-25 04:06:52.088204 | orchestrator | skipping: [testbed-node-4]
2025-05-25 04:06:52.088220 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})
2025-05-25 04:06:52.088230 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})
2025-05-25 04:06:52.088246 | orchestrator | skipping: [testbed-node-5]
2025-05-25 04:06:52.088256 | orchestrator |
2025-05-25 04:06:52.088266 | orchestrator | TASK [cinder : Copying over config.json files for services] ********************
2025-05-25 04:06:52.088276 | orchestrator | Sunday 25 May 2025 04:03:50 +0000 (0:00:02.428) 0:00:57.288 ************
2025-05-25 04:06:52.088286 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})
2025-05-25 04:06:52.088300 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})
2025-05-25 04:06:52.088311 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})
2025-05-25 04:06:52.088327 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})
2025-05-25 04:06:52.088347 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})
2025-05-25 04:06:52.088357 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})
2025-05-25 04:06:52.088367 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})
2025-05-25 04:06:52.088382 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})
2025-05-25 04:06:52.088397 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})
2025-05-25 04:06:52.088408 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})
2025-05-25 04:06:52.088424 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})
2025-05-25 04:06:52.088434 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})
2025-05-25 04:06:52.088444 | orchestrator |
2025-05-25 04:06:52.088454 | orchestrator | TASK [cinder : Copying over cinder-wsgi.conf] **********************************
2025-05-25 04:06:52.088464 | orchestrator | Sunday 25 May 2025 04:03:53 +0000 (0:00:03.475) 0:01:00.764 ************
2025-05-25 04:06:52.088474 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/cinder/templates/cinder-wsgi.conf.j2)
2025-05-25 04:06:52.088483 | orchestrator | skipping: [testbed-node-4]
2025-05-25 04:06:52.088493 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/cinder/templates/cinder-wsgi.conf.j2)
2025-05-25 04:06:52.088503 | orchestrator | skipping: [testbed-node-3]
2025-05-25 04:06:52.088526 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/cinder/templates/cinder-wsgi.conf.j2)
2025-05-25 04:06:52.088547 | orchestrator | skipping: [testbed-node-5]
2025-05-25 04:06:52.088557 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/cinder/templates/cinder-wsgi.conf.j2)
2025-05-25 04:06:52.088566 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/cinder/templates/cinder-wsgi.conf.j2)
2025-05-25 04:06:52.088581 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/cinder/templates/cinder-wsgi.conf.j2)
2025-05-25 04:06:52.088591 | orchestrator |
2025-05-25 04:06:52.088600 | orchestrator | TASK [cinder : Copying over cinder.conf] ***************************************
2025-05-25 04:06:52.088610 | orchestrator | Sunday 25 May 2025 04:03:56 +0000 (0:00:02.275) 0:01:03.040 ************
2025-05-25 04:06:52.088620 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})
2025-05-25 04:06:52.088642 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})
2025-05-25 04:06:52.088653 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})
2025-05-25 04:06:52.088663 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})
2025-05-25 04:06:52.088678 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})
2025-05-25 04:06:52.088719 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})
2025-05-25 04:06:52.088736 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-scheduler',
'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-05-25 04:06:52.088747 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-05-25 04:06:52.088757 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-05-25 04:06:52.088767 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 
'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-05-25 04:06:52.088782 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-05-25 04:06:52.088792 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 
'timeout': '30'}}}) 2025-05-25 04:06:52.088807 | orchestrator | 2025-05-25 04:06:52.088817 | orchestrator | TASK [cinder : Generating 'hostnqn' file for cinder_volume] ******************** 2025-05-25 04:06:52.088827 | orchestrator | Sunday 25 May 2025 04:04:06 +0000 (0:00:10.315) 0:01:13.355 ************ 2025-05-25 04:06:52.088842 | orchestrator | skipping: [testbed-node-0] 2025-05-25 04:06:52.088853 | orchestrator | skipping: [testbed-node-1] 2025-05-25 04:06:52.088862 | orchestrator | skipping: [testbed-node-2] 2025-05-25 04:06:52.088877 | orchestrator | changed: [testbed-node-3] 2025-05-25 04:06:52.088894 | orchestrator | changed: [testbed-node-4] 2025-05-25 04:06:52.089003 | orchestrator | changed: [testbed-node-5] 2025-05-25 04:06:52.089019 | orchestrator | 2025-05-25 04:06:52.089029 | orchestrator | TASK [cinder : Copying over existing policy file] ****************************** 2025-05-25 04:06:52.089038 | orchestrator | Sunday 25 May 2025 04:04:08 +0000 (0:00:01.963) 0:01:15.318 ************ 2025-05-25 04:06:52.089049 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-05-25 04:06:52.089059 | orchestrator | skipping: 
[testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-05-25 04:06:52.089070 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-05-25 04:06:52.089087 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-05-25 04:06:52.089117 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-05-25 04:06:52.089128 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-05-25 04:06:52.089138 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': 
['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-05-25 04:06:52.089148 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-05-25 04:06:52.089158 | orchestrator | skipping: [testbed-node-2] 2025-05-25 04:06:52.089168 | orchestrator | skipping: [testbed-node-0] 2025-05-25 04:06:52.089178 | orchestrator | skipping: [testbed-node-3] 2025-05-25 04:06:52.089188 | orchestrator | skipping: [testbed-node-1] 2025-05-25 04:06:52.089202 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-05-25 04:06:52.089232 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-05-25 04:06:52.089242 | orchestrator | skipping: [testbed-node-4] 2025-05-25 04:06:52.089259 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', 
'/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-05-25 04:06:52.089270 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-05-25 04:06:52.089280 | orchestrator | skipping: [testbed-node-5] 2025-05-25 04:06:52.089290 | orchestrator | 2025-05-25 04:06:52.089299 | orchestrator | TASK [cinder : Copying over nfs_shares files for cinder_volume] **************** 2025-05-25 04:06:52.089309 | orchestrator | Sunday 25 May 2025 04:04:09 +0000 (0:00:01.458) 0:01:16.777 ************ 2025-05-25 04:06:52.089319 | orchestrator | skipping: [testbed-node-0] 2025-05-25 04:06:52.089329 | orchestrator | skipping: [testbed-node-1] 2025-05-25 04:06:52.089338 | orchestrator | skipping: [testbed-node-2] 2025-05-25 04:06:52.089348 | orchestrator | skipping: [testbed-node-3] 2025-05-25 04:06:52.089357 | orchestrator | skipping: [testbed-node-4] 2025-05-25 04:06:52.089367 | orchestrator | skipping: [testbed-node-5] 2025-05-25 04:06:52.089377 | orchestrator | 2025-05-25 04:06:52.089387 | orchestrator | TASK [cinder : Check cinder containers] **************************************** 2025-05-25 04:06:52.089396 | 
orchestrator | Sunday 25 May 2025 04:04:10 +0000 (0:00:01.004) 0:01:17.781 ************ 2025-05-25 04:06:52.089411 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-05-25 04:06:52.089427 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-05-25 04:06:52.089443 | orchestrator | changed: [testbed-node-2] => 
(item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-05-25 04:06:52.089454 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-05-25 04:06:52.089465 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 
'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-05-25 04:06:52.089490 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-05-25 04:06:52.089500 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-05-25 04:06:52.089511 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 
'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-05-25 04:06:52.089526 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-05-25 04:06:52.089537 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-05-25 04:06:52.089547 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-05-25 04:06:52.089568 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-05-25 04:06:52.089586 | orchestrator | 2025-05-25 04:06:52.089779 | orchestrator | TASK [cinder : include_tasks] ************************************************** 2025-05-25 04:06:52.089796 | orchestrator | Sunday 25 May 2025 04:04:13 +0000 (0:00:02.841) 0:01:20.623 ************ 2025-05-25 04:06:52.089807 | orchestrator | skipping: [testbed-node-0] 2025-05-25 04:06:52.089819 | orchestrator | skipping: 
[testbed-node-1] 2025-05-25 04:06:52.089830 | orchestrator | skipping: [testbed-node-2] 2025-05-25 04:06:52.089840 | orchestrator | skipping: [testbed-node-3] 2025-05-25 04:06:52.089852 | orchestrator | skipping: [testbed-node-4] 2025-05-25 04:06:52.089863 | orchestrator | skipping: [testbed-node-5] 2025-05-25 04:06:52.089874 | orchestrator | 2025-05-25 04:06:52.089886 | orchestrator | TASK [cinder : Creating Cinder database] *************************************** 2025-05-25 04:06:52.089897 | orchestrator | Sunday 25 May 2025 04:04:14 +0000 (0:00:00.975) 0:01:21.599 ************ 2025-05-25 04:06:52.089909 | orchestrator | changed: [testbed-node-0] 2025-05-25 04:06:52.089921 | orchestrator | 2025-05-25 04:06:52.089932 | orchestrator | TASK [cinder : Creating Cinder database user and setting permissions] ********** 2025-05-25 04:06:52.089944 | orchestrator | Sunday 25 May 2025 04:04:16 +0000 (0:00:01.808) 0:01:23.407 ************ 2025-05-25 04:06:52.089955 | orchestrator | changed: [testbed-node-0] 2025-05-25 04:06:52.089966 | orchestrator | 2025-05-25 04:06:52.089978 | orchestrator | TASK [cinder : Running Cinder bootstrap container] ***************************** 2025-05-25 04:06:52.089990 | orchestrator | Sunday 25 May 2025 04:04:18 +0000 (0:00:01.761) 0:01:25.169 ************ 2025-05-25 04:06:52.090001 | orchestrator | changed: [testbed-node-0] 2025-05-25 04:06:52.090043 | orchestrator | 2025-05-25 04:06:52.090055 | orchestrator | TASK [cinder : Flush handlers] ************************************************* 2025-05-25 04:06:52.090065 | orchestrator | Sunday 25 May 2025 04:04:36 +0000 (0:00:17.977) 0:01:43.147 ************ 2025-05-25 04:06:52.090075 | orchestrator | 2025-05-25 04:06:52.090092 | orchestrator | TASK [cinder : Flush handlers] ************************************************* 2025-05-25 04:06:52.090100 | orchestrator | Sunday 25 May 2025 04:04:36 +0000 (0:00:00.063) 0:01:43.210 ************ 2025-05-25 04:06:52.090108 | orchestrator | 2025-05-25 
04:06:52.090116 | orchestrator | TASK [cinder : Flush handlers] ************************************************* 2025-05-25 04:06:52.090124 | orchestrator | Sunday 25 May 2025 04:04:36 +0000 (0:00:00.062) 0:01:43.273 ************ 2025-05-25 04:06:52.090132 | orchestrator | 2025-05-25 04:06:52.090140 | orchestrator | TASK [cinder : Flush handlers] ************************************************* 2025-05-25 04:06:52.090148 | orchestrator | Sunday 25 May 2025 04:04:36 +0000 (0:00:00.074) 0:01:43.348 ************ 2025-05-25 04:06:52.090156 | orchestrator | 2025-05-25 04:06:52.090163 | orchestrator | TASK [cinder : Flush handlers] ************************************************* 2025-05-25 04:06:52.090177 | orchestrator | Sunday 25 May 2025 04:04:36 +0000 (0:00:00.062) 0:01:43.411 ************ 2025-05-25 04:06:52.090323 | orchestrator | 2025-05-25 04:06:52.090341 | orchestrator | TASK [cinder : Flush handlers] ************************************************* 2025-05-25 04:06:52.090366 | orchestrator | Sunday 25 May 2025 04:04:36 +0000 (0:00:00.058) 0:01:43.470 ************ 2025-05-25 04:06:52.090374 | orchestrator | 2025-05-25 04:06:52.090383 | orchestrator | RUNNING HANDLER [cinder : Restart cinder-api container] ************************ 2025-05-25 04:06:52.090391 | orchestrator | Sunday 25 May 2025 04:04:36 +0000 (0:00:00.063) 0:01:43.533 ************ 2025-05-25 04:06:52.090399 | orchestrator | changed: [testbed-node-0] 2025-05-25 04:06:52.090407 | orchestrator | changed: [testbed-node-2] 2025-05-25 04:06:52.090415 | orchestrator | changed: [testbed-node-1] 2025-05-25 04:06:52.090423 | orchestrator | 2025-05-25 04:06:52.090431 | orchestrator | RUNNING HANDLER [cinder : Restart cinder-scheduler container] ****************** 2025-05-25 04:06:52.090439 | orchestrator | Sunday 25 May 2025 04:04:59 +0000 (0:00:23.149) 0:02:06.683 ************ 2025-05-25 04:06:52.090447 | orchestrator | changed: [testbed-node-1] 2025-05-25 04:06:52.090454 | orchestrator | changed: 
[testbed-node-2] 2025-05-25 04:06:52.090462 | orchestrator | changed: [testbed-node-0] 2025-05-25 04:06:52.090470 | orchestrator | 2025-05-25 04:06:52.090478 | orchestrator | RUNNING HANDLER [cinder : Restart cinder-volume container] ********************* 2025-05-25 04:06:52.090486 | orchestrator | Sunday 25 May 2025 04:05:09 +0000 (0:00:09.823) 0:02:16.506 ************ 2025-05-25 04:06:52.090494 | orchestrator | changed: [testbed-node-3] 2025-05-25 04:06:52.090502 | orchestrator | changed: [testbed-node-5] 2025-05-25 04:06:52.090509 | orchestrator | changed: [testbed-node-4] 2025-05-25 04:06:52.090517 | orchestrator | 2025-05-25 04:06:52.090525 | orchestrator | RUNNING HANDLER [cinder : Restart cinder-backup container] ********************* 2025-05-25 04:06:52.090533 | orchestrator | Sunday 25 May 2025 04:06:38 +0000 (0:01:28.837) 0:03:45.344 ************ 2025-05-25 04:06:52.090541 | orchestrator | changed: [testbed-node-4] 2025-05-25 04:06:52.090549 | orchestrator | changed: [testbed-node-3] 2025-05-25 04:06:52.090556 | orchestrator | changed: [testbed-node-5] 2025-05-25 04:06:52.090564 | orchestrator | 2025-05-25 04:06:52.090572 | orchestrator | RUNNING HANDLER [cinder : Wait for cinder services to update service versions] *** 2025-05-25 04:06:52.090580 | orchestrator | Sunday 25 May 2025 04:06:49 +0000 (0:00:11.175) 0:03:56.519 ************ 2025-05-25 04:06:52.090588 | orchestrator | skipping: [testbed-node-0] 2025-05-25 04:06:52.090596 | orchestrator | 2025-05-25 04:06:52.090604 | orchestrator | PLAY RECAP ********************************************************************* 2025-05-25 04:06:52.090612 | orchestrator | testbed-node-0 : ok=21  changed=15  unreachable=0 failed=0 skipped=11  rescued=0 ignored=0 2025-05-25 04:06:52.090621 | orchestrator | testbed-node-1 : ok=12  changed=8  unreachable=0 failed=0 skipped=9  rescued=0 ignored=0 2025-05-25 04:06:52.090629 | orchestrator | testbed-node-2 : ok=12  changed=8  unreachable=0 failed=0 skipped=9  rescued=0 
ignored=0 2025-05-25 04:06:52.090643 | orchestrator | testbed-node-3 : ok=18  changed=12  unreachable=0 failed=0 skipped=8  rescued=0 ignored=0 2025-05-25 04:06:52.090651 | orchestrator | testbed-node-4 : ok=18  changed=12  unreachable=0 failed=0 skipped=8  rescued=0 ignored=0 2025-05-25 04:06:52.090659 | orchestrator | testbed-node-5 : ok=18  changed=12  unreachable=0 failed=0 skipped=8  rescued=0 ignored=0 2025-05-25 04:06:52.090667 | orchestrator | 2025-05-25 04:06:52.090675 | orchestrator | 2025-05-25 04:06:52.090683 | orchestrator | TASKS RECAP ******************************************************************** 2025-05-25 04:06:52.090708 | orchestrator | Sunday 25 May 2025 04:06:50 +0000 (0:00:00.561) 0:03:57.080 ************ 2025-05-25 04:06:52.090723 | orchestrator | =============================================================================== 2025-05-25 04:06:52.090735 | orchestrator | cinder : Restart cinder-volume container ------------------------------- 88.84s 2025-05-25 04:06:52.090743 | orchestrator | cinder : Restart cinder-api container ---------------------------------- 23.15s 2025-05-25 04:06:52.090756 | orchestrator | cinder : Running Cinder bootstrap container ---------------------------- 17.98s 2025-05-25 04:06:52.090764 | orchestrator | cinder : Restart cinder-backup container ------------------------------- 11.18s 2025-05-25 04:06:52.090772 | orchestrator | cinder : Copying over cinder.conf -------------------------------------- 10.32s 2025-05-25 04:06:52.090780 | orchestrator | cinder : Restart cinder-scheduler container ----------------------------- 9.82s 2025-05-25 04:06:52.090788 | orchestrator | service-ks-register : cinder | Granting user roles ---------------------- 7.35s 2025-05-25 04:06:52.090795 | orchestrator | service-ks-register : cinder | Creating endpoints ----------------------- 6.57s 2025-05-25 04:06:52.090810 | orchestrator | service-ks-register : cinder | Creating users --------------------------- 3.85s 2025-05-25 
04:06:52.090818 | orchestrator | service-cert-copy : cinder | Copying over extra CA certificates --------- 3.73s 2025-05-25 04:06:52.090826 | orchestrator | cinder : Copying over multiple ceph.conf for cinder services ------------ 3.67s 2025-05-25 04:06:52.090834 | orchestrator | cinder : Copying over config.json files for services -------------------- 3.48s 2025-05-25 04:06:52.090842 | orchestrator | service-ks-register : cinder | Creating services ------------------------ 3.28s 2025-05-25 04:06:52.090850 | orchestrator | service-ks-register : cinder | Creating roles --------------------------- 3.13s 2025-05-25 04:06:52.090857 | orchestrator | service-ks-register : cinder | Creating projects ------------------------ 2.94s 2025-05-25 04:06:52.090869 | orchestrator | cinder : Copy over Ceph keyring files for cinder-backup ----------------- 2.85s 2025-05-25 04:06:52.090877 | orchestrator | cinder : Check cinder containers ---------------------------------------- 2.84s 2025-05-25 04:06:52.090885 | orchestrator | service-cert-copy : cinder | Copying over backend internal TLS key ------ 2.43s 2025-05-25 04:06:52.090893 | orchestrator | cinder : Ensuring config directories exist ------------------------------ 2.43s 2025-05-25 04:06:52.090900 | orchestrator | cinder : Copying over cinder-wsgi.conf ---------------------------------- 2.28s 2025-05-25 04:06:52.090908 | orchestrator | 2025-05-25 04:06:52 | INFO  | Task 9259ea8e-37b5-449f-a04a-bd0975550f8c is in state STARTED 2025-05-25 04:06:52.090916 | orchestrator | 2025-05-25 04:06:52 | INFO  | Task 7bb98bfa-acf7-43e0-a6a2-e5632221fbeb is in state STARTED 2025-05-25 04:06:52.090924 | orchestrator | 2025-05-25 04:06:52 | INFO  | Task 6e378feb-528a-412a-95cd-f98cd2c708a8 is in state STARTED 2025-05-25 04:06:52.090932 | orchestrator | 2025-05-25 04:06:52 | INFO  | Wait 1 second(s) until the next check 2025-05-25 04:06:55.125224 | orchestrator | 2025-05-25 04:06:55 | INFO  | Task e53f6211-1d7d-4bb7-9417-1c740766f41d is in 
state STARTED [… status polling repeated roughly every 3 seconds from 04:06:55 to 04:07:37: tasks e53f6211-1d7d-4bb7-9417-1c740766f41d, 9259ea8e-37b5-449f-a04a-bd0975550f8c, 7bb98bfa-acf7-43e0-a6a2-e5632221fbeb and 6e378feb-528a-412a-95cd-f98cd2c708a8 all remained in state STARTED, each round ending with "Wait 1 second(s) until the next check" …] 2025-05-25 04:07:40.700897 | orchestrator | 2025-05-25 04:07:40 | INFO  | Task e53f6211-1d7d-4bb7-9417-1c740766f41d is in state STARTED 2025-05-25 04:07:40.701086 | orchestrator | 2025-05-25 04:07:40 | INFO  | Task
9259ea8e-37b5-449f-a04a-bd0975550f8c is in state STARTED 2025-05-25 04:07:40.701617 | orchestrator | 2025-05-25 04:07:40 | INFO  | Task 7bb98bfa-acf7-43e0-a6a2-e5632221fbeb is in state STARTED 2025-05-25 04:07:40.702307 | orchestrator | 2025-05-25 04:07:40 | INFO  | Task 6e378feb-528a-412a-95cd-f98cd2c708a8 is in state STARTED 2025-05-25 04:07:40.702331 | orchestrator | 2025-05-25 04:07:40 | INFO  | Wait 1 second(s) until the next check 2025-05-25 04:07:43.731726 | orchestrator | 2025-05-25 04:07:43 | INFO  | Task e53f6211-1d7d-4bb7-9417-1c740766f41d is in state SUCCESS 2025-05-25 04:07:43.732701 | orchestrator | 2025-05-25 04:07:43.732737 | orchestrator | 2025-05-25 04:07:43.732749 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-05-25 04:07:43.732760 | orchestrator | 2025-05-25 04:07:43.732770 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-05-25 04:07:43.732780 | orchestrator | Sunday 25 May 2025 04:05:57 +0000 (0:00:00.247) 0:00:00.247 ************ 2025-05-25 04:07:43.732791 | orchestrator | ok: [testbed-node-0] 2025-05-25 04:07:43.732818 | orchestrator | ok: [testbed-node-1] 2025-05-25 04:07:43.732828 | orchestrator | ok: [testbed-node-2] 2025-05-25 04:07:43.732838 | orchestrator | 2025-05-25 04:07:43.732848 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-05-25 04:07:43.732857 | orchestrator | Sunday 25 May 2025 04:05:57 +0000 (0:00:00.273) 0:00:00.521 ************ 2025-05-25 04:07:43.732867 | orchestrator | ok: [testbed-node-0] => (item=enable_barbican_True) 2025-05-25 04:07:43.732878 | orchestrator | ok: [testbed-node-1] => (item=enable_barbican_True) 2025-05-25 04:07:43.732887 | orchestrator | ok: [testbed-node-2] => (item=enable_barbican_True) 2025-05-25 04:07:43.732897 | orchestrator | 2025-05-25 04:07:43.732907 | orchestrator | PLAY [Apply role barbican] 
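[Editor's note] The STARTED/SUCCESS status loop above is a plain poll-until-done pattern: query each task's state, sleep a fixed interval, and repeat until every task reports SUCCESS. A minimal sketch of that pattern follows; the `get_state` callback is a hypothetical stand-in for whatever lookup the OSISM manager actually exposes, not its real client API:

```python
import time

def wait_for_tasks(task_ids, get_state, interval=1.0, log=print):
    """Poll task states until every task has left the STARTED state."""
    pending = set(task_ids)
    while pending:
        # sorted() snapshots the set, so discarding inside the loop is safe
        for task_id in sorted(pending):
            state = get_state(task_id)  # hypothetical lookup callback
            log(f"Task {task_id} is in state {state}")
            if state == "SUCCESS":
                pending.discard(task_id)
        if pending:
            log(f"Wait {int(interval)} second(s) until the next check")
            time.sleep(interval)

# Fake backend for illustration: each task succeeds after N polls.
counters = {"a": 2, "b": 3}
def fake_state(task_id):
    counters[task_id] -= 1
    return "SUCCESS" if counters[task_id] <= 0 else "STARTED"

wait_for_tasks(["a", "b"], fake_state, interval=0.01, log=lambda msg: None)
```

With a 1-second sleep and a roughly 2-second round trip per query cycle, this reproduces the ~3-second cadence visible in the log timestamps.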
***************************************************** 2025-05-25 04:07:43.732917 | orchestrator | 2025-05-25 04:07:43.732926 | orchestrator | TASK [barbican : include_tasks] ************************************************ 2025-05-25 04:07:43.732936 | orchestrator | Sunday 25 May 2025 04:05:58 +0000 (0:00:00.382) 0:00:00.903 ************ 2025-05-25 04:07:43.732946 | orchestrator | included: /ansible/roles/barbican/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-25 04:07:43.732956 | orchestrator | 2025-05-25 04:07:43.732966 | orchestrator | TASK [service-ks-register : barbican | Creating services] ********************** 2025-05-25 04:07:43.732976 | orchestrator | Sunday 25 May 2025 04:05:58 +0000 (0:00:00.505) 0:00:01.408 ************ 2025-05-25 04:07:43.732986 | orchestrator | changed: [testbed-node-0] => (item=barbican (key-manager)) 2025-05-25 04:07:43.732995 | orchestrator | 2025-05-25 04:07:43.733005 | orchestrator | TASK [service-ks-register : barbican | Creating endpoints] ********************* 2025-05-25 04:07:43.733037 | orchestrator | Sunday 25 May 2025 04:06:01 +0000 (0:00:03.093) 0:00:04.502 ************ 2025-05-25 04:07:43.733048 | orchestrator | changed: [testbed-node-0] => (item=barbican -> https://api-int.testbed.osism.xyz:9311 -> internal) 2025-05-25 04:07:43.733057 | orchestrator | changed: [testbed-node-0] => (item=barbican -> https://api.testbed.osism.xyz:9311 -> public) 2025-05-25 04:07:43.733401 | orchestrator | 2025-05-25 04:07:43.733411 | orchestrator | TASK [service-ks-register : barbican | Creating projects] ********************** 2025-05-25 04:07:43.733421 | orchestrator | Sunday 25 May 2025 04:06:07 +0000 (0:00:06.062) 0:00:10.565 ************ 2025-05-25 04:07:43.733431 | orchestrator | ok: [testbed-node-0] => (item=service) 2025-05-25 04:07:43.733441 | orchestrator | 2025-05-25 04:07:43.733450 | orchestrator | TASK [service-ks-register : barbican | Creating users] ************************* 2025-05-25 
04:07:43.733460 | orchestrator | Sunday 25 May 2025 04:06:10 +0000 (0:00:03.045) 0:00:13.610 ************ 2025-05-25 04:07:43.733470 | orchestrator | [WARNING]: Module did not set no_log for update_password 2025-05-25 04:07:43.733479 | orchestrator | changed: [testbed-node-0] => (item=barbican -> service) 2025-05-25 04:07:43.733489 | orchestrator | 2025-05-25 04:07:43.733498 | orchestrator | TASK [service-ks-register : barbican | Creating roles] ************************* 2025-05-25 04:07:43.733508 | orchestrator | Sunday 25 May 2025 04:06:14 +0000 (0:00:03.670) 0:00:17.280 ************ 2025-05-25 04:07:43.733518 | orchestrator | ok: [testbed-node-0] => (item=admin) 2025-05-25 04:07:43.733528 | orchestrator | changed: [testbed-node-0] => (item=key-manager:service-admin) 2025-05-25 04:07:43.733537 | orchestrator | changed: [testbed-node-0] => (item=creator) 2025-05-25 04:07:43.733547 | orchestrator | changed: [testbed-node-0] => (item=observer) 2025-05-25 04:07:43.733557 | orchestrator | changed: [testbed-node-0] => (item=audit) 2025-05-25 04:07:43.733566 | orchestrator | 2025-05-25 04:07:43.733645 | orchestrator | TASK [service-ks-register : barbican | Granting user roles] ******************** 2025-05-25 04:07:43.733658 | orchestrator | Sunday 25 May 2025 04:06:29 +0000 (0:00:15.193) 0:00:32.474 ************ 2025-05-25 04:07:43.733701 | orchestrator | changed: [testbed-node-0] => (item=barbican -> service -> admin) 2025-05-25 04:07:43.733712 | orchestrator | 2025-05-25 04:07:43.733721 | orchestrator | TASK [barbican : Ensuring config directories exist] **************************** 2025-05-25 04:07:43.733730 | orchestrator | Sunday 25 May 2025 04:06:33 +0000 (0:00:03.769) 0:00:36.244 ************ 2025-05-25 04:07:43.733743 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': 
['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) [… equivalent item dicts repeated for barbican-api on testbed-node-1 and testbed-node-2 (healthcheck targets 192.168.16.11 and 192.168.16.12), for barbican-keystone-listener on all three nodes, and for barbican-worker on testbed-node-0 and testbed-node-2 …] changed: [testbed-node-1] =>
(item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-05-25 04:07:43.733897 | orchestrator | 2025-05-25 04:07:43.733907 | orchestrator | TASK [barbican : Ensuring vassals config directories exist] ******************** 2025-05-25 04:07:43.733917 | orchestrator | Sunday 25 May 2025 04:06:35 +0000 (0:00:01.763) 0:00:38.007 ************ 2025-05-25 04:07:43.733927 | orchestrator | changed: [testbed-node-1] => (item=barbican-api/vassals) 2025-05-25 04:07:43.733936 | orchestrator | changed: [testbed-node-0] => (item=barbican-api/vassals) 2025-05-25 04:07:43.733946 | orchestrator | changed: [testbed-node-2] => (item=barbican-api/vassals) 2025-05-25 04:07:43.733956 | orchestrator | 2025-05-25 04:07:43.733965 | orchestrator | TASK [barbican : Check if policies shall be overwritten] *********************** 2025-05-25 04:07:43.733975 | orchestrator | Sunday 25 May 2025 04:06:36 +0000 (0:00:00.937) 0:00:38.945 ************ 2025-05-25 04:07:43.733984 | orchestrator | skipping: [testbed-node-0] 2025-05-25 04:07:43.733994 | orchestrator | 2025-05-25 04:07:43.734004 | orchestrator | TASK [barbican : Set barbican policy file] ************************************* 2025-05-25 04:07:43.734013 | orchestrator | Sunday 25 May 2025 04:06:36 +0000 (0:00:00.150) 0:00:39.095 ************ 2025-05-25 04:07:43.734181 | orchestrator | skipping: [testbed-node-0] 2025-05-25 04:07:43.734193 | orchestrator | skipping: [testbed-node-1] 2025-05-25 04:07:43.734203 | orchestrator | skipping: [testbed-node-2] 2025-05-25 
04:07:43.734212 | orchestrator | 2025-05-25 04:07:43.734222 | orchestrator | TASK [barbican : include_tasks] ************************************************ 2025-05-25 04:07:43.734232 | orchestrator | Sunday 25 May 2025 04:06:36 +0000 (0:00:00.450) 0:00:39.545 ************ 2025-05-25 04:07:43.734241 | orchestrator | included: /ansible/roles/barbican/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-25 04:07:43.734251 | orchestrator | 2025-05-25 04:07:43.734260 | orchestrator | TASK [service-cert-copy : barbican | Copying over extra CA certificates] ******* 2025-05-25 04:07:43.734270 | orchestrator | Sunday 25 May 2025 04:06:37 +0000 (0:00:00.914) 0:00:40.459 ************ 2025-05-25 04:07:43.734280 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-05-25 04:07:43.734307 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) [… equivalent item dicts repeated for barbican-api on testbed-node-2 (192.168.16.12) and for the barbican-keystone-listener and barbican-worker containers on all three nodes …] 2025-05-25 04:07:43.734416 | orchestrator | 2025-05-25 04:07:43.734426 | orchestrator | TASK [service-cert-copy : barbican | Copying over backend internal TLS certificate] *** 2025-05-25 04:07:43.734436 | orchestrator | Sunday 25 May 2025 04:06:41 +0000 (0:00:03.569) 0:00:44.029 ************ 2025-05-25 04:07:43.734446 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True,
'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2025-05-25 04:07:43.734456 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-05-25 04:07:43.734467 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-05-25 04:07:43.734478 | orchestrator | skipping: [testbed-node-0] 2025-05-25 04:07:43.734500 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2025-05-25 04:07:43.734516 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2025-05-25 04:07:43.734526 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-05-25 04:07:43.734536 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-05-25 04:07:43.734546 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-05-25 04:07:43.734556 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-05-25 04:07:43.734571 | orchestrator | skipping: [testbed-node-2] 2025-05-25 04:07:43.734581 | orchestrator | skipping: [testbed-node-1] 2025-05-25 04:07:43.734591 | orchestrator | 2025-05-25 04:07:43.734602 | orchestrator | TASK [service-cert-copy : barbican | Copying over backend internal TLS key] **** 2025-05-25 04:07:43.734617 | orchestrator | Sunday 25 May 2025 04:06:42 +0000 (0:00:01.216) 0:00:45.246 ************ 2025-05-25 04:07:43.734649 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2025-05-25 04:07:43.734693 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-05-25 04:07:43.734707 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-05-25 04:07:43.734717 | orchestrator | skipping: [testbed-node-0] 2025-05-25 04:07:43.734728 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2025-05-25 04:07:43.734751 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-05-25 04:07:43.734762 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-05-25 04:07:43.734772 | orchestrator | skipping: [testbed-node-2] 2025-05-25 04:07:43.734794 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 
'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2025-05-25 04:07:43.734805 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-05-25 04:07:43.734818 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-05-25 04:07:43.734829 | orchestrator | skipping: [testbed-node-1] 2025-05-25 04:07:43.734840 | orchestrator | 2025-05-25 04:07:43.734851 | orchestrator | TASK [barbican : Copying over config.json files for services] ****************** 2025-05-25 04:07:43.734862 | orchestrator | Sunday 25 May 2025 04:06:43 +0000 (0:00:00.967) 0:00:46.214 ************ 2025-05-25 04:07:43.734873 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-05-25 04:07:43.734896 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-05-25 04:07:43.734913 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-05-25 04:07:43.734926 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 
'timeout': '30'}}}) 2025-05-25 04:07:43.734937 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-05-25 04:07:43.734948 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-05-25 04:07:43.734964 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 
2025-05-25 04:07:43.734981 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-05-25 04:07:43.734998 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-05-25 04:07:43.735009 | orchestrator | 2025-05-25 04:07:43.735020 | orchestrator | TASK [barbican : Copying over barbican-api.ini] ******************************** 2025-05-25 04:07:43.735031 | orchestrator | Sunday 25 May 2025 04:06:47 +0000 (0:00:03.456) 0:00:49.670 ************ 2025-05-25 04:07:43.735043 | orchestrator | changed: [testbed-node-2] 2025-05-25 04:07:43.735054 | orchestrator | changed: [testbed-node-0] 2025-05-25 04:07:43.735065 | orchestrator | changed: [testbed-node-1] 2025-05-25 04:07:43.735077 | orchestrator | 2025-05-25 04:07:43.735087 | orchestrator | TASK [barbican : Checking whether barbican-api-paste.ini file exists] ********** 2025-05-25 04:07:43.735096 | orchestrator | Sunday 25 May 2025 04:06:49 +0000 (0:00:02.364) 
0:00:52.034 ************ 2025-05-25 04:07:43.735106 | orchestrator | ok: [testbed-node-0 -> localhost] 2025-05-25 04:07:43.735115 | orchestrator | 2025-05-25 04:07:43.735125 | orchestrator | TASK [barbican : Copying over barbican-api-paste.ini] ************************** 2025-05-25 04:07:43.735134 | orchestrator | Sunday 25 May 2025 04:06:50 +0000 (0:00:00.873) 0:00:52.908 ************ 2025-05-25 04:07:43.735144 | orchestrator | skipping: [testbed-node-0] 2025-05-25 04:07:43.735153 | orchestrator | skipping: [testbed-node-1] 2025-05-25 04:07:43.735163 | orchestrator | skipping: [testbed-node-2] 2025-05-25 04:07:43.735172 | orchestrator | 2025-05-25 04:07:43.735182 | orchestrator | TASK [barbican : Copying over barbican.conf] *********************************** 2025-05-25 04:07:43.735191 | orchestrator | Sunday 25 May 2025 04:06:50 +0000 (0:00:00.493) 0:00:53.401 ************ 2025-05-25 04:07:43.735201 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-05-25 04:07:43.735217 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 
'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-05-25 04:07:43.735233 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-05-25 04:07:43.735248 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 
'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-05-25 04:07:43.735258 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-05-25 04:07:43.735275 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-05-25 04:07:43.735285 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 
'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-05-25 04:07:43.735295 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-05-25 04:07:43.735305 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-05-25 04:07:43.735315 | orchestrator | 2025-05-25 04:07:43.735325 | orchestrator | TASK [barbican : Copying over existing policy file] **************************** 2025-05-25 04:07:43.735335 | orchestrator | Sunday 25 May 2025 04:07:00 +0000 (0:00:09.745) 0:01:03.146 ************ 2025-05-25 04:07:43.735356 
| orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2025-05-25 04:07:43.735367 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-05-25 04:07:43.735384 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-05-25 04:07:43.735394 | orchestrator | skipping: [testbed-node-0] 2025-05-25 04:07:43.735404 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2025-05-25 04:07:43.735414 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-05-25 04:07:43.735430 | 
orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-05-25 04:07:43.735444 | orchestrator | skipping: [testbed-node-1] 2025-05-25 04:07:43.735455 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2025-05-25 04:07:43.735471 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': 
['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-05-25 04:07:43.735481 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-05-25 04:07:43.735491 | orchestrator | skipping: [testbed-node-2] 2025-05-25 04:07:43.735501 | orchestrator | 2025-05-25 04:07:43.735510 | orchestrator | TASK [barbican : Check barbican containers] ************************************ 2025-05-25 04:07:43.735520 | orchestrator | Sunday 25 May 2025 04:07:01 +0000 (0:00:00.614) 0:01:03.761 ************ 2025-05-25 04:07:43.735530 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': 
{'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-05-25 04:07:43.735551 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-05-25 04:07:43.735562 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 
'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-05-25 04:07:43.735577 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-05-25 04:07:43.735588 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-05-25 04:07:43.735597 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-05-25 04:07:43.735607 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-05-25 04:07:43.735645 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-05-25 04:07:43.735655 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-05-25 04:07:43.735740 | orchestrator | 2025-05-25 04:07:43.735760 | orchestrator | TASK [barbican : include_tasks] ************************************************ 2025-05-25 04:07:43.735778 | orchestrator | Sunday 25 May 2025 04:07:03 +0000 (0:00:02.369) 0:01:06.130 ************ 2025-05-25 04:07:43.735788 | orchestrator | skipping: [testbed-node-0] 2025-05-25 04:07:43.735798 | orchestrator | skipping: [testbed-node-1] 2025-05-25 04:07:43.735807 | orchestrator | skipping: [testbed-node-2] 2025-05-25 04:07:43.735817 | orchestrator | 2025-05-25 04:07:43.735826 | orchestrator | TASK [barbican : Creating barbican database] *********************************** 2025-05-25 04:07:43.735836 | orchestrator | Sunday 25 May 2025 04:07:03 +0000 (0:00:00.264) 0:01:06.394 ************ 2025-05-25 04:07:43.735845 | orchestrator | changed: [testbed-node-0] 2025-05-25 04:07:43.735854 | orchestrator | 2025-05-25 04:07:43.735864 | orchestrator | TASK [barbican : Creating barbican database user and setting permissions] ****** 2025-05-25 04:07:43.735873 | orchestrator | Sunday 25 May 2025 04:07:05 +0000 (0:00:02.065) 0:01:08.459 ************ 2025-05-25 04:07:43.735882 | orchestrator | changed: [testbed-node-0] 2025-05-25 04:07:43.735892 | orchestrator | 2025-05-25 04:07:43.735901 | orchestrator | TASK [barbican : Running barbican bootstrap container] ************************* 2025-05-25 04:07:43.735910 | orchestrator | Sunday 25 May 2025 04:07:07 +0000 (0:00:01.971) 0:01:10.431 ************ 2025-05-25 04:07:43.735920 | orchestrator | changed: [testbed-node-0] 2025-05-25 04:07:43.735929 | orchestrator | 2025-05-25 04:07:43.735939 | orchestrator | TASK [barbican : Flush handlers] *********************************************** 2025-05-25 04:07:43.735948 | orchestrator | Sunday 25 May 2025 04:07:19 +0000 (0:00:11.255) 0:01:21.687 
************ 2025-05-25 04:07:43.735957 | orchestrator | 2025-05-25 04:07:43.735967 | orchestrator | TASK [barbican : Flush handlers] *********************************************** 2025-05-25 04:07:43.735976 | orchestrator | Sunday 25 May 2025 04:07:19 +0000 (0:00:00.132) 0:01:21.820 ************ 2025-05-25 04:07:43.735986 | orchestrator | 2025-05-25 04:07:43.735995 | orchestrator | TASK [barbican : Flush handlers] *********************************************** 2025-05-25 04:07:43.736004 | orchestrator | Sunday 25 May 2025 04:07:19 +0000 (0:00:00.145) 0:01:21.965 ************ 2025-05-25 04:07:43.736014 | orchestrator | 2025-05-25 04:07:43.736023 | orchestrator | RUNNING HANDLER [barbican : Restart barbican-api container] ******************** 2025-05-25 04:07:43.736032 | orchestrator | Sunday 25 May 2025 04:07:19 +0000 (0:00:00.136) 0:01:22.102 ************ 2025-05-25 04:07:43.736042 | orchestrator | changed: [testbed-node-0] 2025-05-25 04:07:43.736051 | orchestrator | changed: [testbed-node-1] 2025-05-25 04:07:43.736061 | orchestrator | changed: [testbed-node-2] 2025-05-25 04:07:43.736070 | orchestrator | 2025-05-25 04:07:43.736079 | orchestrator | RUNNING HANDLER [barbican : Restart barbican-keystone-listener container] ****** 2025-05-25 04:07:43.736089 | orchestrator | Sunday 25 May 2025 04:07:27 +0000 (0:00:07.675) 0:01:29.777 ************ 2025-05-25 04:07:43.736098 | orchestrator | changed: [testbed-node-0] 2025-05-25 04:07:43.736107 | orchestrator | changed: [testbed-node-1] 2025-05-25 04:07:43.736117 | orchestrator | changed: [testbed-node-2] 2025-05-25 04:07:43.736126 | orchestrator | 2025-05-25 04:07:43.736135 | orchestrator | RUNNING HANDLER [barbican : Restart barbican-worker container] ***************** 2025-05-25 04:07:43.736145 | orchestrator | Sunday 25 May 2025 04:07:33 +0000 (0:00:06.755) 0:01:36.532 ************ 2025-05-25 04:07:43.736154 | orchestrator | changed: [testbed-node-0] 2025-05-25 04:07:43.736163 | orchestrator | changed: 
[testbed-node-2] 2025-05-25 04:07:43.736173 | orchestrator | changed: [testbed-node-1] 2025-05-25 04:07:43.736182 | orchestrator | 2025-05-25 04:07:43.736191 | orchestrator | PLAY RECAP ********************************************************************* 2025-05-25 04:07:43.736202 | orchestrator | testbed-node-0 : ok=24  changed=18  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2025-05-25 04:07:43.736219 | orchestrator | testbed-node-1 : ok=14  changed=10  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2025-05-25 04:07:43.736229 | orchestrator | testbed-node-2 : ok=14  changed=10  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2025-05-25 04:07:43.736238 | orchestrator | 2025-05-25 04:07:43.736248 | orchestrator | 2025-05-25 04:07:43.736257 | orchestrator | TASKS RECAP ******************************************************************** 2025-05-25 04:07:43.736267 | orchestrator | Sunday 25 May 2025 04:07:40 +0000 (0:00:06.870) 0:01:43.403 ************ 2025-05-25 04:07:43.736276 | orchestrator | =============================================================================== 2025-05-25 04:07:43.736286 | orchestrator | service-ks-register : barbican | Creating roles ------------------------ 15.19s 2025-05-25 04:07:43.736301 | orchestrator | barbican : Running barbican bootstrap container ------------------------ 11.26s 2025-05-25 04:07:43.736311 | orchestrator | barbican : Copying over barbican.conf ----------------------------------- 9.75s 2025-05-25 04:07:43.736320 | orchestrator | barbican : Restart barbican-api container ------------------------------- 7.68s 2025-05-25 04:07:43.736330 | orchestrator | barbican : Restart barbican-worker container ---------------------------- 6.87s 2025-05-25 04:07:43.736345 | orchestrator | barbican : Restart barbican-keystone-listener container ----------------- 6.76s 2025-05-25 04:07:43.736355 | orchestrator | service-ks-register : barbican | Creating endpoints --------------------- 6.06s 2025-05-25 
04:07:43.736365 | orchestrator | service-ks-register : barbican | Granting user roles -------------------- 3.77s 2025-05-25 04:07:43.736374 | orchestrator | service-ks-register : barbican | Creating users ------------------------- 3.67s 2025-05-25 04:07:43.736384 | orchestrator | service-cert-copy : barbican | Copying over extra CA certificates ------- 3.57s 2025-05-25 04:07:43.736393 | orchestrator | barbican : Copying over config.json files for services ------------------ 3.46s 2025-05-25 04:07:43.736403 | orchestrator | service-ks-register : barbican | Creating services ---------------------- 3.09s 2025-05-25 04:07:43.736412 | orchestrator | service-ks-register : barbican | Creating projects ---------------------- 3.05s 2025-05-25 04:07:43.736422 | orchestrator | barbican : Check barbican containers ------------------------------------ 2.37s 2025-05-25 04:07:43.736431 | orchestrator | barbican : Copying over barbican-api.ini -------------------------------- 2.36s 2025-05-25 04:07:43.736441 | orchestrator | barbican : Creating barbican database ----------------------------------- 2.07s 2025-05-25 04:07:43.736450 | orchestrator | barbican : Creating barbican database user and setting permissions ------ 1.97s 2025-05-25 04:07:43.736460 | orchestrator | barbican : Ensuring config directories exist ---------------------------- 1.76s 2025-05-25 04:07:43.736469 | orchestrator | service-cert-copy : barbican | Copying over backend internal TLS certificate --- 1.22s 2025-05-25 04:07:43.736478 | orchestrator | service-cert-copy : barbican | Copying over backend internal TLS key ---- 0.97s 2025-05-25 04:07:43.736488 | orchestrator | 2025-05-25 04:07:43 | INFO  | Task a1e9895b-586c-4b2b-8f60-194cbb4d4731 is in state STARTED 2025-05-25 04:07:43.736498 | orchestrator | 2025-05-25 04:07:43 | INFO  | Task 9259ea8e-37b5-449f-a04a-bd0975550f8c is in state STARTED 2025-05-25 04:07:43.736508 | orchestrator | 2025-05-25 04:07:43 | INFO  | Task 7bb98bfa-acf7-43e0-a6a2-e5632221fbeb is 
in state STARTED 2025-05-25 04:07:43.736517 | orchestrator | 2025-05-25 04:07:43 | INFO  | Task 6e378feb-528a-412a-95cd-f98cd2c708a8 is in state STARTED 2025-05-25 04:07:43.736527 | orchestrator | 2025-05-25 04:07:43 | INFO  | Wait 1 second(s) until the next check [identical STARTED status checks for tasks a1e9895b-586c-4b2b-8f60-194cbb4d4731, 9259ea8e-37b5-449f-a04a-bd0975550f8c, 7bb98bfa-acf7-43e0-a6a2-e5632221fbeb and 6e378feb-528a-412a-95cd-f98cd2c708a8 repeated every ~3 seconds from 04:07:46 through 04:08:38] 2025-05-25 04:08:41.542541 | orchestrator | 2025-05-25 04:08:41 | INFO  | Task a1e9895b-586c-4b2b-8f60-194cbb4d4731 is in state SUCCESS 2025-05-25 04:08:41.545181 | orchestrator | 2025-05-25 04:08:41 | INFO  | Task 56c75e97-d2ef-4252-8ec1-2ee0ea9ab855 is in state STARTED [identical STARTED status checks for tasks 9259ea8e, 7bb98bfa, 6e378feb and 56c75e97 repeated every ~3 seconds from 04:08:44 through 04:09:02] 2025-05-25 04:09:05.949971 | orchestrator | 2025-05-25 04:09:05 | INFO  | Task 9259ea8e-37b5-449f-a04a-bd0975550f8c is in state STARTED 2025-05-25 04:09:05.950107 | orchestrator | 2025-05-25 04:09:05 | INFO  | Task 7bb98bfa-acf7-43e0-a6a2-e5632221fbeb is in state STARTED 2025-05-25 04:09:05.950130 | orchestrator | 2025-05-25 04:09:05 | INFO  | Task 
6e378feb-528a-412a-95cd-f98cd2c708a8 is in state STARTED 2025-05-25 04:09:05.951594 | orchestrator | 2025-05-25 04:09:05 | INFO  | Task 56c75e97-d2ef-4252-8ec1-2ee0ea9ab855 is in state STARTED 2025-05-25 04:09:05.951658 | orchestrator | 2025-05-25 04:09:05 | INFO  | Wait 1 second(s) until the next check 2025-05-25 04:09:08.997726 | orchestrator | 2025-05-25 04:09:08 | INFO  | Task 9259ea8e-37b5-449f-a04a-bd0975550f8c is in state STARTED 2025-05-25 04:09:08.997996 | orchestrator | 2025-05-25 04:09:08 | INFO  | Task 7bb98bfa-acf7-43e0-a6a2-e5632221fbeb is in state STARTED 2025-05-25 04:09:08.999039 | orchestrator | 2025-05-25 04:09:08 | INFO  | Task 6e378feb-528a-412a-95cd-f98cd2c708a8 is in state STARTED 2025-05-25 04:09:08.999660 | orchestrator | 2025-05-25 04:09:08 | INFO  | Task 56c75e97-d2ef-4252-8ec1-2ee0ea9ab855 is in state STARTED 2025-05-25 04:09:08.999766 | orchestrator | 2025-05-25 04:09:08 | INFO  | Wait 1 second(s) until the next check 2025-05-25 04:09:12.036387 | orchestrator | 2025-05-25 04:09:12 | INFO  | Task 9259ea8e-37b5-449f-a04a-bd0975550f8c is in state STARTED 2025-05-25 04:09:12.037344 | orchestrator | 2025-05-25 04:09:12 | INFO  | Task 7bb98bfa-acf7-43e0-a6a2-e5632221fbeb is in state STARTED 2025-05-25 04:09:12.038437 | orchestrator | 2025-05-25 04:09:12 | INFO  | Task 6e378feb-528a-412a-95cd-f98cd2c708a8 is in state STARTED 2025-05-25 04:09:12.039648 | orchestrator | 2025-05-25 04:09:12 | INFO  | Task 56c75e97-d2ef-4252-8ec1-2ee0ea9ab855 is in state STARTED 2025-05-25 04:09:12.039805 | orchestrator | 2025-05-25 04:09:12 | INFO  | Wait 1 second(s) until the next check 2025-05-25 04:09:15.084365 | orchestrator | 2025-05-25 04:09:15 | INFO  | Task 9259ea8e-37b5-449f-a04a-bd0975550f8c is in state STARTED 2025-05-25 04:09:15.085394 | orchestrator | 2025-05-25 04:09:15 | INFO  | Task 7bb98bfa-acf7-43e0-a6a2-e5632221fbeb is in state STARTED 2025-05-25 04:09:15.087250 | orchestrator | 2025-05-25 04:09:15 | INFO  | Task 
6e378feb-528a-412a-95cd-f98cd2c708a8 is in state STARTED 2025-05-25 04:09:15.089555 | orchestrator | 2025-05-25 04:09:15 | INFO  | Task 56c75e97-d2ef-4252-8ec1-2ee0ea9ab855 is in state STARTED 2025-05-25 04:09:15.089890 | orchestrator | 2025-05-25 04:09:15 | INFO  | Wait 1 second(s) until the next check 2025-05-25 04:09:18.136317 | orchestrator | 2025-05-25 04:09:18 | INFO  | Task 9259ea8e-37b5-449f-a04a-bd0975550f8c is in state STARTED 2025-05-25 04:09:18.136702 | orchestrator | 2025-05-25 04:09:18 | INFO  | Task 7bb98bfa-acf7-43e0-a6a2-e5632221fbeb is in state STARTED 2025-05-25 04:09:18.137226 | orchestrator | 2025-05-25 04:09:18 | INFO  | Task 6e378feb-528a-412a-95cd-f98cd2c708a8 is in state STARTED 2025-05-25 04:09:18.138007 | orchestrator | 2025-05-25 04:09:18 | INFO  | Task 56c75e97-d2ef-4252-8ec1-2ee0ea9ab855 is in state STARTED 2025-05-25 04:09:18.138113 | orchestrator | 2025-05-25 04:09:18 | INFO  | Wait 1 second(s) until the next check 2025-05-25 04:09:21.187184 | orchestrator | 2025-05-25 04:09:21 | INFO  | Task 9259ea8e-37b5-449f-a04a-bd0975550f8c is in state STARTED 2025-05-25 04:09:21.192531 | orchestrator | 2025-05-25 04:09:21 | INFO  | Task 7bb98bfa-acf7-43e0-a6a2-e5632221fbeb is in state STARTED 2025-05-25 04:09:21.194193 | orchestrator | 2025-05-25 04:09:21 | INFO  | Task 6e378feb-528a-412a-95cd-f98cd2c708a8 is in state STARTED 2025-05-25 04:09:21.197733 | orchestrator | 2025-05-25 04:09:21 | INFO  | Task 56c75e97-d2ef-4252-8ec1-2ee0ea9ab855 is in state STARTED 2025-05-25 04:09:21.198106 | orchestrator | 2025-05-25 04:09:21 | INFO  | Wait 1 second(s) until the next check 2025-05-25 04:09:24.247395 | orchestrator | 2025-05-25 04:09:24 | INFO  | Task 9259ea8e-37b5-449f-a04a-bd0975550f8c is in state STARTED 2025-05-25 04:09:24.247502 | orchestrator | 2025-05-25 04:09:24 | INFO  | Task 7bb98bfa-acf7-43e0-a6a2-e5632221fbeb is in state STARTED 2025-05-25 04:09:24.248933 | orchestrator | 2025-05-25 04:09:24 | INFO  | Task 
6e378feb-528a-412a-95cd-f98cd2c708a8 is in state STARTED 2025-05-25 04:09:24.250770 | orchestrator | 2025-05-25 04:09:24 | INFO  | Task 56c75e97-d2ef-4252-8ec1-2ee0ea9ab855 is in state STARTED 2025-05-25 04:09:24.251020 | orchestrator | 2025-05-25 04:09:24 | INFO  | Wait 1 second(s) until the next check 2025-05-25 04:09:27.295004 | orchestrator | 2025-05-25 04:09:27 | INFO  | Task 9259ea8e-37b5-449f-a04a-bd0975550f8c is in state STARTED 2025-05-25 04:09:27.295409 | orchestrator | 2025-05-25 04:09:27 | INFO  | Task 7bb98bfa-acf7-43e0-a6a2-e5632221fbeb is in state STARTED 2025-05-25 04:09:27.298147 | orchestrator | 2025-05-25 04:09:27 | INFO  | Task 6e378feb-528a-412a-95cd-f98cd2c708a8 is in state STARTED 2025-05-25 04:09:27.299138 | orchestrator | 2025-05-25 04:09:27 | INFO  | Task 56c75e97-d2ef-4252-8ec1-2ee0ea9ab855 is in state STARTED 2025-05-25 04:09:27.299298 | orchestrator | 2025-05-25 04:09:27 | INFO  | Wait 1 second(s) until the next check 2025-05-25 04:09:30.335371 | orchestrator | 2025-05-25 04:09:30 | INFO  | Task 9259ea8e-37b5-449f-a04a-bd0975550f8c is in state STARTED 2025-05-25 04:09:30.335813 | orchestrator | 2025-05-25 04:09:30 | INFO  | Task 7bb98bfa-acf7-43e0-a6a2-e5632221fbeb is in state STARTED 2025-05-25 04:09:30.337333 | orchestrator | 2025-05-25 04:09:30 | INFO  | Task 6e378feb-528a-412a-95cd-f98cd2c708a8 is in state STARTED 2025-05-25 04:09:30.338450 | orchestrator | 2025-05-25 04:09:30 | INFO  | Task 56c75e97-d2ef-4252-8ec1-2ee0ea9ab855 is in state STARTED 2025-05-25 04:09:30.338502 | orchestrator | 2025-05-25 04:09:30 | INFO  | Wait 1 second(s) until the next check 2025-05-25 04:09:33.383936 | orchestrator | 2025-05-25 04:09:33 | INFO  | Task 9259ea8e-37b5-449f-a04a-bd0975550f8c is in state STARTED 2025-05-25 04:09:33.387528 | orchestrator | 2025-05-25 04:09:33 | INFO  | Task 7bb98bfa-acf7-43e0-a6a2-e5632221fbeb is in state STARTED 2025-05-25 04:09:33.389463 | orchestrator | 2025-05-25 04:09:33 | INFO  | Task 
6e378feb-528a-412a-95cd-f98cd2c708a8 is in state STARTED 2025-05-25 04:09:33.391551 | orchestrator | 2025-05-25 04:09:33 | INFO  | Task 56c75e97-d2ef-4252-8ec1-2ee0ea9ab855 is in state STARTED 2025-05-25 04:09:33.391705 | orchestrator | 2025-05-25 04:09:33 | INFO  | Wait 1 second(s) until the next check 2025-05-25 04:09:36.451261 | orchestrator | 2025-05-25 04:09:36 | INFO  | Task 9259ea8e-37b5-449f-a04a-bd0975550f8c is in state STARTED 2025-05-25 04:09:36.452706 | orchestrator | 2025-05-25 04:09:36 | INFO  | Task 7bb98bfa-acf7-43e0-a6a2-e5632221fbeb is in state STARTED 2025-05-25 04:09:36.453078 | orchestrator | 2025-05-25 04:09:36 | INFO  | Task 6e378feb-528a-412a-95cd-f98cd2c708a8 is in state STARTED 2025-05-25 04:09:36.455204 | orchestrator | 2025-05-25 04:09:36 | INFO  | Task 56c75e97-d2ef-4252-8ec1-2ee0ea9ab855 is in state STARTED 2025-05-25 04:09:36.455310 | orchestrator | 2025-05-25 04:09:36 | INFO  | Wait 1 second(s) until the next check 2025-05-25 04:09:39.496295 | orchestrator | 2025-05-25 04:09:39 | INFO  | Task 9259ea8e-37b5-449f-a04a-bd0975550f8c is in state STARTED 2025-05-25 04:09:39.499288 | orchestrator | 2025-05-25 04:09:39 | INFO  | Task 7bb98bfa-acf7-43e0-a6a2-e5632221fbeb is in state STARTED 2025-05-25 04:09:39.499754 | orchestrator | 2025-05-25 04:09:39 | INFO  | Task 6e378feb-528a-412a-95cd-f98cd2c708a8 is in state STARTED 2025-05-25 04:09:39.500351 | orchestrator | 2025-05-25 04:09:39 | INFO  | Task 56c75e97-d2ef-4252-8ec1-2ee0ea9ab855 is in state STARTED 2025-05-25 04:09:39.500376 | orchestrator | 2025-05-25 04:09:39 | INFO  | Wait 1 second(s) until the next check 2025-05-25 04:09:42.536075 | orchestrator | 2025-05-25 04:09:42 | INFO  | Task 9259ea8e-37b5-449f-a04a-bd0975550f8c is in state STARTED 2025-05-25 04:09:42.537123 | orchestrator | 2025-05-25 04:09:42 | INFO  | Task 7bb98bfa-acf7-43e0-a6a2-e5632221fbeb is in state STARTED 2025-05-25 04:09:42.538099 | orchestrator | 2025-05-25 04:09:42 | INFO  | Task 
6e378feb-528a-412a-95cd-f98cd2c708a8 is in state STARTED 2025-05-25 04:09:42.539088 | orchestrator | 2025-05-25 04:09:42 | INFO  | Task 56c75e97-d2ef-4252-8ec1-2ee0ea9ab855 is in state STARTED 2025-05-25 04:09:42.539149 | orchestrator | 2025-05-25 04:09:42 | INFO  | Wait 1 second(s) until the next check 2025-05-25 04:09:45.581989 | orchestrator | 2025-05-25 04:09:45 | INFO  | Task 9259ea8e-37b5-449f-a04a-bd0975550f8c is in state STARTED 2025-05-25 04:09:45.582131 | orchestrator | 2025-05-25 04:09:45 | INFO  | Task 7bb98bfa-acf7-43e0-a6a2-e5632221fbeb is in state STARTED 2025-05-25 04:09:45.582746 | orchestrator | 2025-05-25 04:09:45 | INFO  | Task 6e378feb-528a-412a-95cd-f98cd2c708a8 is in state STARTED 2025-05-25 04:09:45.584271 | orchestrator | 2025-05-25 04:09:45 | INFO  | Task 56c75e97-d2ef-4252-8ec1-2ee0ea9ab855 is in state STARTED 2025-05-25 04:09:45.584628 | orchestrator | 2025-05-25 04:09:45 | INFO  | Wait 1 second(s) until the next check 2025-05-25 04:09:48.645903 | orchestrator | 2025-05-25 04:09:48 | INFO  | Task 9259ea8e-37b5-449f-a04a-bd0975550f8c is in state STARTED 2025-05-25 04:09:48.648216 | orchestrator | 2025-05-25 04:09:48 | INFO  | Task 7bb98bfa-acf7-43e0-a6a2-e5632221fbeb is in state STARTED 2025-05-25 04:09:48.649157 | orchestrator | 2025-05-25 04:09:48 | INFO  | Task 6e378feb-528a-412a-95cd-f98cd2c708a8 is in state STARTED 2025-05-25 04:09:48.650578 | orchestrator | 2025-05-25 04:09:48 | INFO  | Task 56c75e97-d2ef-4252-8ec1-2ee0ea9ab855 is in state STARTED 2025-05-25 04:09:48.650637 | orchestrator | 2025-05-25 04:09:48 | INFO  | Wait 1 second(s) until the next check 2025-05-25 04:09:51.693204 | orchestrator | 2025-05-25 04:09:51 | INFO  | Task 9259ea8e-37b5-449f-a04a-bd0975550f8c is in state STARTED 2025-05-25 04:09:51.695834 | orchestrator | 2025-05-25 04:09:51 | INFO  | Task 7bb98bfa-acf7-43e0-a6a2-e5632221fbeb is in state STARTED 2025-05-25 04:09:51.699382 | orchestrator | 2025-05-25 04:09:51 | INFO  | Task 
6e378feb-528a-412a-95cd-f98cd2c708a8 is in state STARTED
2025-05-25 04:09:51.701763 | orchestrator | 2025-05-25 04:09:51 | INFO  | Task 56c75e97-d2ef-4252-8ec1-2ee0ea9ab855 is in state SUCCESS
2025-05-25 04:09:51.703938 | orchestrator |
2025-05-25 04:09:51.704027 | orchestrator |
2025-05-25 04:09:51.704050 | orchestrator | PLAY [Download ironic ipa images] **********************************************
2025-05-25 04:09:51.704071 | orchestrator |
2025-05-25 04:09:51.704107 | orchestrator | TASK [Ensure the destination directory exists] *********************************
2025-05-25 04:09:51.704126 | orchestrator | Sunday 25 May 2025 04:07:46 +0000 (0:00:00.097) 0:00:00.097 ************
2025-05-25 04:09:51.704145 | orchestrator | changed: [localhost]
2025-05-25 04:09:51.704164 | orchestrator |
2025-05-25 04:09:51.704183 | orchestrator | TASK [Download ironic-agent initramfs] *****************************************
2025-05-25 04:09:51.704202 | orchestrator | Sunday 25 May 2025 04:07:48 +0000 (0:00:01.867) 0:00:01.964 ************
2025-05-25 04:09:51.704221 | orchestrator | changed: [localhost]
2025-05-25 04:09:51.704239 | orchestrator |
2025-05-25 04:09:51.704257 | orchestrator | TASK [Download ironic-agent kernel] ********************************************
2025-05-25 04:09:51.704276 | orchestrator | Sunday 25 May 2025 04:08:34 +0000 (0:00:46.029) 0:00:47.994 ************
2025-05-25 04:09:51.704295 | orchestrator | changed: [localhost]
2025-05-25 04:09:51.704313 | orchestrator |
2025-05-25 04:09:51.704331 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2025-05-25 04:09:51.704350 | orchestrator |
2025-05-25 04:09:51.704368 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2025-05-25 04:09:51.704387 | orchestrator | Sunday 25 May 2025 04:08:37 +0000 (0:00:03.731) 0:00:51.725 ************
2025-05-25 04:09:51.704405 | orchestrator | ok: [testbed-node-0]
2025-05-25 04:09:51.704424 | orchestrator | ok: [testbed-node-1]
2025-05-25 04:09:51.704443 | orchestrator | ok: [testbed-node-2]
2025-05-25 04:09:51.704463 | orchestrator |
2025-05-25 04:09:51.704482 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2025-05-25 04:09:51.704502 | orchestrator | Sunday 25 May 2025 04:08:38 +0000 (0:00:00.348) 0:00:52.073 ************
2025-05-25 04:09:51.704521 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: enable_ironic_True
2025-05-25 04:09:51.704541 | orchestrator | ok: [testbed-node-0] => (item=enable_ironic_False)
2025-05-25 04:09:51.704562 | orchestrator | ok: [testbed-node-1] => (item=enable_ironic_False)
2025-05-25 04:09:51.704581 | orchestrator | ok: [testbed-node-2] => (item=enable_ironic_False)
2025-05-25 04:09:51.704643 | orchestrator |
2025-05-25 04:09:51.704664 | orchestrator | PLAY [Apply role ironic] *******************************************************
2025-05-25 04:09:51.704684 | orchestrator | skipping: no hosts matched
2025-05-25 04:09:51.704704 | orchestrator |
2025-05-25 04:09:51.704723 | orchestrator | PLAY RECAP *********************************************************************
2025-05-25 04:09:51.704744 | orchestrator | localhost : ok=3  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-05-25 04:09:51.704796 | orchestrator | testbed-node-0 : ok=2  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-05-25 04:09:51.704817 | orchestrator | testbed-node-1 : ok=2  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-05-25 04:09:51.704835 | orchestrator | testbed-node-2 : ok=2  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-05-25 04:09:51.704853 | orchestrator |
2025-05-25 04:09:51.704871 | orchestrator |
2025-05-25 04:09:51.704890 | orchestrator | TASKS RECAP ********************************************************************
2025-05-25 04:09:51.704909 | orchestrator | Sunday 25 May 2025 04:08:39 +0000 (0:00:01.077) 0:00:53.150 ************
2025-05-25 04:09:51.704928 | orchestrator | ===============================================================================
2025-05-25 04:09:51.704946 | orchestrator | Download ironic-agent initramfs ---------------------------------------- 46.03s
2025-05-25 04:09:51.704965 | orchestrator | Download ironic-agent kernel -------------------------------------------- 3.73s
2025-05-25 04:09:51.705003 | orchestrator | Ensure the destination directory exists --------------------------------- 1.87s
2025-05-25 04:09:51.705022 | orchestrator | Group hosts based on enabled services ----------------------------------- 1.08s
2025-05-25 04:09:51.705041 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.35s
2025-05-25 04:09:51.705059 | orchestrator |
2025-05-25 04:09:51.705078 | orchestrator |
2025-05-25 04:09:51.705096 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2025-05-25 04:09:51.705114 | orchestrator |
2025-05-25 04:09:51.705133 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2025-05-25 04:09:51.705152 | orchestrator | Sunday 25 May 2025 04:08:43 +0000 (0:00:00.256) 0:00:00.256 ************
2025-05-25 04:09:51.705171 | orchestrator | ok: [testbed-node-0]
2025-05-25 04:09:51.705190 | orchestrator | ok: [testbed-node-1]
2025-05-25 04:09:51.705208 | orchestrator | ok: [testbed-node-2]
2025-05-25 04:09:51.705227 | orchestrator |
2025-05-25 04:09:51.705245 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2025-05-25 04:09:51.705264 | orchestrator | Sunday 25 May 2025 04:08:44 +0000 (0:00:00.286) 0:00:00.543 ************
2025-05-25 04:09:51.705282 | orchestrator | ok: [testbed-node-0] => (item=enable_placement_True)
2025-05-25 04:09:51.705301 | orchestrator | ok: [testbed-node-1] => (item=enable_placement_True)
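The "Task … is in state STARTED / Wait 1 second(s) until the next check" lines repeated throughout this job are a fixed-interval polling loop over asynchronous task IDs. A minimal sketch of that behavior, assuming a hypothetical `get_state` callback (this is not the actual osism client API) that returns "STARTED" until a task finishes:

```python
import time


def wait_for_tasks(get_state, task_ids, interval=1.0, timeout=None):
    """Poll each task's state until none remains in STARTED.

    get_state: hypothetical callback mapping a task ID to its state string.
    Returns a dict of task ID -> final state; raises TimeoutError if
    `timeout` seconds elapse with tasks still pending.
    """
    pending = set(task_ids)
    results = {}
    start = time.monotonic()
    while pending:
        for task_id in sorted(pending):
            state = get_state(task_id)
            print(f"Task {task_id} is in state {state}")
            if state != "STARTED":
                results[task_id] = state  # task reached a terminal state
        pending -= results.keys()
        if pending:
            if timeout is not None and time.monotonic() - start > timeout:
                raise TimeoutError(f"tasks still pending: {sorted(pending)}")
            print(f"Wait {int(interval)} second(s) until the next check")
            time.sleep(interval)
    return results
```

Each cycle re-checks every still-pending task and sleeps for the interval, which matches the per-second blocks of STARTED lines above, where tasks drop out of the output one by one as they reach SUCCESS.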
2025-05-25 04:09:51.705320 | orchestrator | ok: [testbed-node-2] => (item=enable_placement_True) 2025-05-25 04:09:51.705338 | orchestrator | 2025-05-25 04:09:51.705357 | orchestrator | PLAY [Apply role placement] **************************************************** 2025-05-25 04:09:51.705375 | orchestrator | 2025-05-25 04:09:51.705394 | orchestrator | TASK [placement : include_tasks] *********************************************** 2025-05-25 04:09:51.705412 | orchestrator | Sunday 25 May 2025 04:08:44 +0000 (0:00:00.373) 0:00:00.916 ************ 2025-05-25 04:09:51.705431 | orchestrator | included: /ansible/roles/placement/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-25 04:09:51.705450 | orchestrator | 2025-05-25 04:09:51.705468 | orchestrator | TASK [service-ks-register : placement | Creating services] ********************* 2025-05-25 04:09:51.705488 | orchestrator | Sunday 25 May 2025 04:08:44 +0000 (0:00:00.461) 0:00:01.378 ************ 2025-05-25 04:09:51.705532 | orchestrator | changed: [testbed-node-0] => (item=placement (placement)) 2025-05-25 04:09:51.705551 | orchestrator | 2025-05-25 04:09:51.705570 | orchestrator | TASK [service-ks-register : placement | Creating endpoints] ******************** 2025-05-25 04:09:51.705588 | orchestrator | Sunday 25 May 2025 04:08:48 +0000 (0:00:03.313) 0:00:04.692 ************ 2025-05-25 04:09:51.705636 | orchestrator | changed: [testbed-node-0] => (item=placement -> https://api-int.testbed.osism.xyz:8780 -> internal) 2025-05-25 04:09:51.705656 | orchestrator | changed: [testbed-node-0] => (item=placement -> https://api.testbed.osism.xyz:8780 -> public) 2025-05-25 04:09:51.705675 | orchestrator | 2025-05-25 04:09:51.705694 | orchestrator | TASK [service-ks-register : placement | Creating projects] ********************* 2025-05-25 04:09:51.705727 | orchestrator | Sunday 25 May 2025 04:08:54 +0000 (0:00:06.152) 0:00:10.844 ************ 2025-05-25 04:09:51.705745 | orchestrator | ok: 
[testbed-node-0] => (item=service) 2025-05-25 04:09:51.705765 | orchestrator | 2025-05-25 04:09:51.705783 | orchestrator | TASK [service-ks-register : placement | Creating users] ************************ 2025-05-25 04:09:51.705802 | orchestrator | Sunday 25 May 2025 04:08:57 +0000 (0:00:03.265) 0:00:14.109 ************ 2025-05-25 04:09:51.705821 | orchestrator | [WARNING]: Module did not set no_log for update_password 2025-05-25 04:09:51.705840 | orchestrator | changed: [testbed-node-0] => (item=placement -> service) 2025-05-25 04:09:51.705858 | orchestrator | 2025-05-25 04:09:51.705877 | orchestrator | TASK [service-ks-register : placement | Creating roles] ************************ 2025-05-25 04:09:51.705896 | orchestrator | Sunday 25 May 2025 04:09:01 +0000 (0:00:03.742) 0:00:17.851 ************ 2025-05-25 04:09:51.705915 | orchestrator | ok: [testbed-node-0] => (item=admin) 2025-05-25 04:09:51.705932 | orchestrator | 2025-05-25 04:09:51.705950 | orchestrator | TASK [service-ks-register : placement | Granting user roles] ******************* 2025-05-25 04:09:51.705969 | orchestrator | Sunday 25 May 2025 04:09:04 +0000 (0:00:03.394) 0:00:21.246 ************ 2025-05-25 04:09:51.705988 | orchestrator | changed: [testbed-node-0] => (item=placement -> service -> admin) 2025-05-25 04:09:51.706006 | orchestrator | 2025-05-25 04:09:51.706092 | orchestrator | TASK [placement : include_tasks] *********************************************** 2025-05-25 04:09:51.706112 | orchestrator | Sunday 25 May 2025 04:09:08 +0000 (0:00:03.879) 0:00:25.126 ************ 2025-05-25 04:09:51.706131 | orchestrator | skipping: [testbed-node-0] 2025-05-25 04:09:51.706151 | orchestrator | skipping: [testbed-node-1] 2025-05-25 04:09:51.706171 | orchestrator | skipping: [testbed-node-2] 2025-05-25 04:09:51.706190 | orchestrator | 2025-05-25 04:09:51.706210 | orchestrator | TASK [placement : Ensuring config directories exist] *************************** 2025-05-25 04:09:51.706230 | orchestrator | 
Sunday 25 May 2025 04:09:09 +0000 (0:00:00.314) 0:00:25.440 ************ 2025-05-25 04:09:51.706253 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-05-25 04:09:51.706280 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-05-25 04:09:51.706316 | orchestrator | changed: 
[testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-05-25 04:09:51.706348 | orchestrator | 2025-05-25 04:09:51.706370 | orchestrator | TASK [placement : Check if policies shall be overwritten] ********************** 2025-05-25 04:09:51.706390 | orchestrator | Sunday 25 May 2025 04:09:10 +0000 (0:00:01.015) 0:00:26.456 ************ 2025-05-25 04:09:51.706410 | orchestrator | skipping: [testbed-node-0] 2025-05-25 04:09:51.706430 | orchestrator | 2025-05-25 04:09:51.706453 | orchestrator | TASK [placement : Set placement policy file] *********************************** 2025-05-25 04:09:51.706474 | orchestrator | Sunday 25 May 2025 04:09:10 +0000 (0:00:00.137) 0:00:26.594 ************ 2025-05-25 04:09:51.706495 | orchestrator | skipping: [testbed-node-0] 2025-05-25 04:09:51.706516 | orchestrator | skipping: [testbed-node-1] 2025-05-25 04:09:51.706594 | orchestrator | skipping: [testbed-node-2] 2025-05-25 04:09:51.706688 | orchestrator | 2025-05-25 04:09:51.706704 | orchestrator | TASK [placement : include_tasks] *********************************************** 2025-05-25 04:09:51.706722 | orchestrator | Sunday 25 May 2025 04:09:10 +0000 (0:00:00.551) 
0:00:27.145 ************ 2025-05-25 04:09:51.706739 | orchestrator | included: /ansible/roles/placement/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-25 04:09:51.706758 | orchestrator | 2025-05-25 04:09:51.706773 | orchestrator | TASK [service-cert-copy : placement | Copying over extra CA certificates] ****** 2025-05-25 04:09:51.706789 | orchestrator | Sunday 25 May 2025 04:09:11 +0000 (0:00:00.553) 0:00:27.699 ************ 2025-05-25 04:09:51.706808 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-05-25 04:09:51.706838 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 
'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})
2025-05-25 04:09:51.706857 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})
2025-05-25 04:09:51.706887 | orchestrator |
2025-05-25 04:09:51.706917 | orchestrator | TASK [service-cert-copy : placement | Copying over backend internal TLS certificate] ***
2025-05-25 04:09:51.706934 | orchestrator | Sunday 25 May 2025 04:09:12 +0000 (0:00:01.310) 0:00:29.010 ************
2025-05-25 04:09:51.706951 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})
2025-05-25 04:09:51.706968 | orchestrator | skipping: [testbed-node-0]
2025-05-25 04:09:51.706985 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})
2025-05-25 04:09:51.707003 | orchestrator | skipping: [testbed-node-1]
2025-05-25 04:09:51.707028 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})
2025-05-25 04:09:51.707045 | orchestrator | skipping: [testbed-node-2]
2025-05-25 04:09:51.707088 | orchestrator |
2025-05-25 04:09:51.707104 | orchestrator | TASK [service-cert-copy : placement | Copying over backend internal TLS key] ***
2025-05-25 04:09:51.707122 | orchestrator | Sunday 25 May 2025 04:09:13 +0000 (0:00:00.584) 0:00:29.594 ************
2025-05-25 04:09:51.707140 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})
2025-05-25 04:09:51.707158 | orchestrator | skipping: [testbed-node-0]
2025-05-25 04:09:51.707189 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})
2025-05-25 04:09:51.707204 | orchestrator | skipping: [testbed-node-1]
2025-05-25 04:09:51.707215 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})
2025-05-25 04:09:51.707225 | orchestrator | skipping: [testbed-node-2]
2025-05-25 04:09:51.707235 | orchestrator |
2025-05-25 04:09:51.707244 | orchestrator | TASK [placement : Copying over config.json files for services] *****************
2025-05-25 04:09:51.707255 | orchestrator | Sunday 25 May 2025 04:09:13 +0000 (0:00:00.643) 0:00:30.237 ************
2025-05-25 04:09:51.707270 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})
2025-05-25 04:09:51.707292 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})
2025-05-25 04:09:51.707311 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})
2025-05-25 04:09:51.707322 | orchestrator |
2025-05-25 04:09:51.707332 | orchestrator | TASK [placement : Copying over placement.conf] *********************************
2025-05-25 04:09:51.707348 | orchestrator | Sunday 25 May 2025 04:09:15 +0000 (0:00:01.259) 0:00:31.496 ************
2025-05-25 04:09:51.707366 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})
2025-05-25 04:09:51.707384 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})
2025-05-25 04:09:51.707427 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})
2025-05-25 04:09:51.707441 | orchestrator |
2025-05-25 04:09:51.707451 | orchestrator | TASK [placement : Copying over placement-api wsgi configuration] ***************
2025-05-25 04:09:51.707461 | orchestrator | Sunday 25 May 2025 04:09:17 +0000 (0:00:02.250) 0:00:33.746 ************
2025-05-25 04:09:51.707471 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/placement/templates/placement-api-wsgi.conf.j2)
2025-05-25 04:09:51.707481 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/placement/templates/placement-api-wsgi.conf.j2)
2025-05-25 04:09:51.707491 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/placement/templates/placement-api-wsgi.conf.j2)
2025-05-25 04:09:51.707501 | orchestrator |
2025-05-25 04:09:51.707511 | orchestrator | TASK [placement : Copying over migrate-db.rc.j2 configuration] *****************
2025-05-25 04:09:51.707527 | orchestrator | Sunday 25 May 2025 04:09:18 +0000 (0:00:01.635) 0:00:35.382 ************
2025-05-25 04:09:51.707537 | orchestrator | changed: [testbed-node-0]
2025-05-25 04:09:51.707548 | orchestrator | changed: [testbed-node-1]
2025-05-25 04:09:51.707557 | orchestrator | changed: [testbed-node-2]
2025-05-25 04:09:51.707567 | orchestrator |
2025-05-25 04:09:51.707577 | orchestrator | TASK [placement : Copying over existing policy file] ***************************
2025-05-25 04:09:51.707587 | orchestrator | Sunday 25 May 2025 04:09:20 +0000 (0:00:01.298) 0:00:36.681 ************
2025-05-25 04:09:51.707620 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})
2025-05-25 04:09:51.707631 | orchestrator | skipping: [testbed-node-0]
2025-05-25 04:09:51.707642 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})
2025-05-25 04:09:51.707659 | orchestrator | skipping: [testbed-node-1]
2025-05-25 04:09:51.707674 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})
2025-05-25 04:09:51.707685 | orchestrator | skipping: [testbed-node-2]
2025-05-25 04:09:51.707695 | orchestrator |
2025-05-25 04:09:51.707704 | orchestrator | TASK [placement : Check placement containers] **********************************
2025-05-25 04:09:51.707714 | orchestrator | Sunday 25 May 2025 04:09:20 +0000 (0:00:00.461) 0:00:37.142 ************
2025-05-25 04:09:51.707731 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})
2025-05-25 04:09:51.707743 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})
2025-05-25 04:09:51.707753 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})
2025-05-25 04:09:51.707770 | orchestrator |
2025-05-25 04:09:51.707780 | orchestrator | TASK [placement : Creating placement databases] ********************************
2025-05-25 04:09:51.707789 | orchestrator | Sunday 25 May 2025 04:09:21 +0000 (0:00:01.237) 0:00:38.380 ************
2025-05-25 04:09:51.707799 | orchestrator | changed: [testbed-node-0]
2025-05-25 04:09:51.707808 | orchestrator |
2025-05-25 04:09:51.707818 | orchestrator | TASK [placement : Creating placement databases user and setting permissions] ***
2025-05-25 04:09:51.707828 | orchestrator | Sunday 25 May 2025 04:09:23 +0000 (0:00:02.049) 0:00:40.430 ************
2025-05-25 04:09:51.707837 | orchestrator | changed: [testbed-node-0]
2025-05-25 04:09:51.707848 | orchestrator |
2025-05-25 04:09:51.707857 | orchestrator | TASK [placement : Running placement bootstrap container] ***********************
2025-05-25 04:09:51.707867 | orchestrator | Sunday 25 May 2025 04:09:25 +0000 (0:00:01.876) 0:00:42.306 ************
2025-05-25 04:09:51.707877 | orchestrator | changed: [testbed-node-0]
2025-05-25 04:09:51.707886 | orchestrator |
2025-05-25 04:09:51.707896 | orchestrator | TASK [placement : Flush handlers] **********************************************
2025-05-25 04:09:51.707905 | orchestrator | Sunday 25 May 2025 04:09:38 +0000 (0:00:13.041) 0:00:55.348 ************
2025-05-25 04:09:51.707915 | orchestrator |
2025-05-25 04:09:51.707929 | orchestrator | TASK [placement : Flush handlers] **********************************************
2025-05-25 04:09:51.707940 | orchestrator | Sunday 25 May 2025 04:09:38 +0000 (0:00:00.070) 0:00:55.419 ************
2025-05-25 04:09:51.707950 | orchestrator |
2025-05-25 04:09:51.707959 | orchestrator | TASK [placement : Flush handlers] **********************************************
2025-05-25 04:09:51.707969 | orchestrator | Sunday 25 May 2025 04:09:39 +0000 (0:00:00.063) 0:00:55.482 ************
2025-05-25 04:09:51.707978 | orchestrator |
2025-05-25 04:09:51.707988 | orchestrator | RUNNING HANDLER [placement : Restart placement-api container] ******************
2025-05-25 04:09:51.707998 | orchestrator | Sunday 25 May 2025 04:09:39 +0000 (0:00:00.061) 0:00:55.544 ************
2025-05-25 04:09:51.708008 | orchestrator | changed: [testbed-node-1]
2025-05-25 04:09:51.708018 | orchestrator | changed: [testbed-node-0]
2025-05-25 04:09:51.708028 | orchestrator | changed: [testbed-node-2]
2025-05-25 04:09:51.708037 | orchestrator |
2025-05-25 04:09:51.708047 | orchestrator | PLAY RECAP *********************************************************************
2025-05-25 04:09:51.708059 | orchestrator | testbed-node-0 : ok=21  changed=15  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2025-05-25 04:09:51.708070 | orchestrator | testbed-node-1 : ok=12  changed=8  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0
2025-05-25 04:09:51.708080 | orchestrator | testbed-node-2 : ok=12  changed=8  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0
2025-05-25 04:09:51.708090 | orchestrator |
2025-05-25 04:09:51.708100 | orchestrator |
2025-05-25 04:09:51.708109 | orchestrator | TASKS RECAP ********************************************************************
2025-05-25 04:09:51.708119 | orchestrator | Sunday 25 May 2025 04:09:50 +0000 (0:00:11.371) 0:01:06.915 ************
2025-05-25 04:09:51.708129 | orchestrator | ===============================================================================
2025-05-25 04:09:51.708144 | orchestrator | placement : Running placement bootstrap container ---------------------- 13.04s
2025-05-25 04:09:51.708154 | orchestrator | placement : Restart placement-api container ---------------------------- 11.37s
2025-05-25 04:09:51.708164 | orchestrator | service-ks-register : placement | Creating endpoints -------------------- 6.15s
2025-05-25 04:09:51.708174 | orchestrator | service-ks-register : placement | Granting user roles ------------------- 3.88s
2025-05-25 04:09:51.708184 | orchestrator | service-ks-register : placement | Creating users ------------------------ 3.74s
2025-05-25 04:09:51.708199 | orchestrator | service-ks-register : placement | Creating roles ------------------------ 3.39s
2025-05-25 04:09:51.708209 | orchestrator | service-ks-register : placement | Creating services --------------------- 3.31s
2025-05-25 04:09:51.708224 | orchestrator | service-ks-register : placement | Creating projects --------------------- 3.27s
2025-05-25 04:09:51.708242 | orchestrator | placement : Copying over placement.conf --------------------------------- 2.25s
2025-05-25 04:09:51.708260 | orchestrator | placement : Creating placement databases -------------------------------- 2.05s
2025-05-25 04:09:51.708276 | orchestrator | placement : Creating placement databases user and setting permissions --- 1.88s
2025-05-25 04:09:51.708291 | orchestrator | placement : Copying over placement-api wsgi configuration --------------- 1.64s
2025-05-25 04:09:51.708309 | orchestrator | service-cert-copy : placement | Copying over extra CA certificates ------ 1.31s
2025-05-25 04:09:51.708325 | orchestrator | placement : Copying over migrate-db.rc.j2 configuration ----------------- 1.30s
2025-05-25 04:09:51.708343 | orchestrator | placement : Copying over config.json files for services ----------------- 1.26s
2025-05-25 04:09:51.708359 | orchestrator | placement : Check placement containers ---------------------------------- 1.24s
2025-05-25 04:09:51.708376 | orchestrator | placement : Ensuring config directories exist --------------------------- 1.02s
2025-05-25 04:09:51.708391 | orchestrator | service-cert-copy : placement | Copying over backend internal TLS key --- 0.64s
2025-05-25 04:09:51.708408 | orchestrator | service-cert-copy : placement | Copying over backend internal TLS certificate --- 0.58s
2025-05-25 04:09:51.708424 | orchestrator | placement : include_tasks ----------------------------------------------- 0.55s
2025-05-25 04:09:51.708441 | orchestrator | 2025-05-25 04:09:51 | INFO  | Wait 1 second(s) until the next check
2025-05-25 04:09:54.764552 | orchestrator | 2025-05-25 04:09:54 | INFO  | Task e271a777-9167-4741-94de-8f1eee63744d is in state STARTED
2025-05-25 04:09:54.766443 | orchestrator | 2025-05-25 04:09:54 | INFO  | Task 9259ea8e-37b5-449f-a04a-bd0975550f8c is in state SUCCESS
2025-05-25 04:09:54.767947 | orchestrator |
2025-05-25 04:09:54.767990 | orchestrator |
2025-05-25 04:09:54.768633 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2025-05-25 04:09:54.768665 | orchestrator |
2025-05-25 04:09:54.768677 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2025-05-25 04:09:54.768690 | orchestrator | Sunday 25 May 2025 04:05:41 +0000 (0:00:00.307) 0:00:00.307 ************
2025-05-25 04:09:54.768702 | orchestrator | ok: [testbed-node-0]
2025-05-25 04:09:54.768933 | orchestrator | ok: [testbed-node-1]
2025-05-25 04:09:54.768951 | orchestrator | ok: [testbed-node-2]
2025-05-25 04:09:54.768963 | orchestrator | ok: [testbed-node-3]
2025-05-25 04:09:54.768974 | orchestrator | ok: [testbed-node-4]
2025-05-25 04:09:54.768985 | orchestrator | ok: [testbed-node-5]
2025-05-25 04:09:54.768996 | orchestrator |
2025-05-25 04:09:54.769007 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2025-05-25 04:09:54.769018 | orchestrator | Sunday 25 May 2025 04:05:42 +0000 (0:00:00.860) 0:00:01.167 ************
2025-05-25 04:09:54.769029 | orchestrator | ok: [testbed-node-0] => (item=enable_neutron_True)
2025-05-25 04:09:54.769041 | orchestrator | ok: [testbed-node-1] => (item=enable_neutron_True)
2025-05-25 04:09:54.769052 | orchestrator | ok: [testbed-node-2] => (item=enable_neutron_True)
2025-05-25 04:09:54.769082 | orchestrator | ok: [testbed-node-3] => (item=enable_neutron_True)
2025-05-25 04:09:54.769101 | orchestrator | ok: [testbed-node-4] => (item=enable_neutron_True)
2025-05-25 04:09:54.769119 | orchestrator | ok: [testbed-node-5] => (item=enable_neutron_True)
2025-05-25 04:09:54.769136 | orchestrator |
2025-05-25 04:09:54.769155 | orchestrator | PLAY [Apply role neutron] ******************************************************
2025-05-25 04:09:54.769173 | orchestrator |
2025-05-25 04:09:54.769190 | orchestrator | TASK [neutron : include_tasks] *************************************************
2025-05-25 04:09:54.769209 | orchestrator | Sunday 25 May 2025 04:05:42 +0000 (0:00:00.774) 0:00:01.942 ************
2025-05-25 04:09:54.769228 | orchestrator | included: /ansible/roles/neutron/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-05-25 04:09:54.769276 | orchestrator |
2025-05-25 04:09:54.769296 | orchestrator | TASK [neutron : Get container facts] *******************************************
2025-05-25 04:09:54.769309 | orchestrator | Sunday 25 May 2025 04:05:44 +0000 (0:00:01.383) 0:00:03.325 ************
2025-05-25 04:09:54.769320 | orchestrator | ok: [testbed-node-0]
2025-05-25 04:09:54.769331 | orchestrator | ok: [testbed-node-1]
2025-05-25 04:09:54.769342 | orchestrator | ok: [testbed-node-2]
2025-05-25 04:09:54.769353 | orchestrator | ok: [testbed-node-3]
2025-05-25 04:09:54.769363 | orchestrator | ok: [testbed-node-4]
2025-05-25 04:09:54.769375 | orchestrator | ok: [testbed-node-5]
2025-05-25 04:09:54.769386 | orchestrator |
2025-05-25 04:09:54.769397 | orchestrator | TASK [neutron : Get container volume facts] ************************************
2025-05-25 04:09:54.769408 | orchestrator | Sunday 25 May 2025 04:05:45 +0000 (0:00:01.217) 0:00:04.542 ************
2025-05-25 04:09:54.769419 | orchestrator | ok: [testbed-node-1]
2025-05-25 04:09:54.769429 | orchestrator | ok: [testbed-node-0]
2025-05-25 04:09:54.769440 | orchestrator | ok: [testbed-node-2]
2025-05-25 04:09:54.769451 | orchestrator | ok: [testbed-node-3]
2025-05-25 04:09:54.769461 | orchestrator | ok: [testbed-node-4]
2025-05-25 04:09:54.769472 | orchestrator | ok: [testbed-node-5]
2025-05-25 04:09:54.769482 | orchestrator |
2025-05-25 04:09:54.769493 | orchestrator | TASK [neutron : Check for ML2/OVN presence] ************************************
2025-05-25 04:09:54.769504 | orchestrator | Sunday 25 May 2025 04:05:46 +0000 (0:00:01.064) 0:00:05.606 ************
2025-05-25 04:09:54.769515 | orchestrator | ok: [testbed-node-0] => {
2025-05-25 04:09:54.769527 | orchestrator |  "changed": false,
2025-05-25 04:09:54.769537 | orchestrator |  "msg": "All assertions passed"
2025-05-25 04:09:54.769548 | orchestrator | }
2025-05-25 04:09:54.769725 | orchestrator | ok: [testbed-node-1] => {
2025-05-25 04:09:54.769740 | orchestrator |  "changed": false,
2025-05-25 04:09:54.769753 | orchestrator |  "msg": "All assertions passed"
2025-05-25 04:09:54.769766 | orchestrator | }
2025-05-25 04:09:54.769778 | orchestrator | ok: [testbed-node-2] => {
2025-05-25 04:09:54.769789 | orchestrator |  "changed": false,
2025-05-25 04:09:54.769800 | orchestrator |  "msg": "All assertions passed"
2025-05-25 04:09:54.769811 | orchestrator | }
2025-05-25 04:09:54.769821 | orchestrator | ok: [testbed-node-3] => {
2025-05-25 04:09:54.769832 | orchestrator |  "changed": false,
2025-05-25 04:09:54.769843 | orchestrator |  "msg": "All assertions passed"
2025-05-25 04:09:54.769854 | orchestrator | }
2025-05-25 04:09:54.769864 | orchestrator | ok: [testbed-node-4] => {
2025-05-25 04:09:54.769875 | orchestrator |  "changed": false,
2025-05-25 04:09:54.769886 | orchestrator |  "msg": "All assertions passed"
2025-05-25 04:09:54.769897 | orchestrator | }
2025-05-25 04:09:54.769908 | orchestrator | ok: [testbed-node-5] => {
2025-05-25 04:09:54.769919 | orchestrator |  "changed": false,
2025-05-25 04:09:54.769929 | orchestrator |  "msg": "All assertions passed"
2025-05-25 04:09:54.769940 | orchestrator | }
2025-05-25 04:09:54.769951 | orchestrator |
2025-05-25 04:09:54.769962 | orchestrator | TASK [neutron : Check for ML2/OVS presence] ************************************
2025-05-25 04:09:54.769973 | orchestrator | Sunday 25 May 2025 04:05:47 +0000 (0:00:00.738) 0:00:06.345 ************
2025-05-25 04:09:54.769984 | orchestrator | skipping: [testbed-node-0]
2025-05-25 04:09:54.769995 | orchestrator | skipping: [testbed-node-1]
2025-05-25 04:09:54.770006 | orchestrator | skipping: [testbed-node-2]
2025-05-25 04:09:54.770144 | orchestrator | skipping: [testbed-node-3]
2025-05-25 04:09:54.770163 | orchestrator | skipping: [testbed-node-4]
2025-05-25 04:09:54.770174 | orchestrator | skipping: [testbed-node-5]
2025-05-25 04:09:54.770185 | orchestrator |
2025-05-25 04:09:54.770196 | orchestrator | TASK [service-ks-register : neutron | Creating services] ***********************
2025-05-25 04:09:54.770207 | orchestrator | Sunday 25 May 2025 04:05:47 +0000 (0:00:00.592) 0:00:06.938 ************
2025-05-25 04:09:54.770218 | orchestrator | changed: [testbed-node-0] => (item=neutron (network))
2025-05-25 04:09:54.770243 | orchestrator |
2025-05-25 04:09:54.770254 | orchestrator | TASK [service-ks-register : neutron | Creating endpoints] **********************
2025-05-25 04:09:54.770265 | orchestrator | Sunday 25 May 2025 04:05:51 +0000 (0:00:03.166) 0:00:10.105 ************
2025-05-25 04:09:54.770276 | orchestrator | changed: [testbed-node-0] => (item=neutron -> https://api-int.testbed.osism.xyz:9696 -> internal)
2025-05-25 04:09:54.770288 | orchestrator | changed: [testbed-node-0] => (item=neutron -> https://api.testbed.osism.xyz:9696 -> public)
2025-05-25 04:09:54.770299 | orchestrator |
2025-05-25 04:09:54.770355 | orchestrator | TASK [service-ks-register : neutron | Creating projects] ***********************
2025-05-25 04:09:54.770391 | orchestrator | Sunday 25 May 2025 04:05:57 +0000 (0:00:06.025) 0:00:16.130 ************
2025-05-25 04:09:54.770403 | orchestrator | ok: [testbed-node-0] => (item=service)
2025-05-25 04:09:54.770414 | orchestrator |
2025-05-25 04:09:54.770424 | orchestrator | TASK [service-ks-register : neutron | Creating users] **************************
2025-05-25 04:09:54.770435 | orchestrator | Sunday 25 May 2025 04:06:00 +0000 (0:00:02.972) 0:00:19.103 ************
2025-05-25 04:09:54.770446 | orchestrator | [WARNING]: Module did not set no_log for update_password
2025-05-25 04:09:54.770457 | orchestrator | changed: [testbed-node-0] => (item=neutron -> service)
2025-05-25 04:09:54.770468 | orchestrator |
2025-05-25 04:09:54.770479 | orchestrator | TASK [service-ks-register : neutron | Creating roles] **************************
2025-05-25 04:09:54.770489 | orchestrator | Sunday 25 May 2025 04:06:03 +0000 (0:00:03.665) 0:00:22.768 ************
2025-05-25 04:09:54.770500 | orchestrator | ok: [testbed-node-0] => (item=admin)
2025-05-25 04:09:54.770511 | orchestrator |
2025-05-25 04:09:54.770521 | orchestrator | TASK [service-ks-register : neutron | Granting user roles] *********************
2025-05-25 04:09:54.770577 | orchestrator | Sunday 25 May 2025 04:06:06 +0000 (0:00:03.178) 0:00:25.946 ************
2025-05-25 04:09:54.770589 | orchestrator | changed: [testbed-node-0] => (item=neutron -> service -> admin)
2025-05-25 04:09:54.770620 | orchestrator | changed: [testbed-node-0] => (item=neutron -> service -> service)
2025-05-25 04:09:54.770632 | orchestrator |
2025-05-25 04:09:54.770643 | orchestrator | TASK [neutron : include_tasks] *************************************************
2025-05-25 04:09:54.770654 | orchestrator | Sunday 25 May 2025 04:06:14 +0000 (0:00:07.451) 0:00:33.398 ************
2025-05-25 04:09:54.770664 | orchestrator | skipping: [testbed-node-0]
2025-05-25 04:09:54.770675 | orchestrator | skipping: [testbed-node-1]
2025-05-25 04:09:54.770692 | orchestrator | skipping: [testbed-node-2]
2025-05-25 04:09:54.770712 | orchestrator | skipping: [testbed-node-3]
2025-05-25 04:09:54.770731 | orchestrator | skipping: [testbed-node-4]
2025-05-25 04:09:54.770750 | orchestrator | skipping: [testbed-node-5]
2025-05-25 04:09:54.770769 | orchestrator |
2025-05-25 04:09:54.770789 | orchestrator | TASK [Load and persist kernel modules] *****************************************
2025-05-25 04:09:54.770811 | orchestrator | Sunday 25 May 2025 04:06:15 +0000 (0:00:00.704) 0:00:34.103 ************
2025-05-25 04:09:54.770832 | orchestrator | skipping: [testbed-node-1]
2025-05-25 04:09:54.770854 | orchestrator | skipping: [testbed-node-0]
2025-05-25 04:09:54.770875 | orchestrator | skipping: [testbed-node-3]
2025-05-25 04:09:54.770896 | orchestrator | skipping: [testbed-node-2]
2025-05-25 04:09:54.770911 | orchestrator | skipping: [testbed-node-4]
2025-05-25 04:09:54.770925 | orchestrator | skipping: [testbed-node-5]
2025-05-25 04:09:54.770937 | orchestrator |
2025-05-25 04:09:54.770950 | orchestrator | TASK [neutron : Check IPv6 support] ********************************************
2025-05-25 04:09:54.770963 | orchestrator | Sunday 25 May 2025 04:06:17 +0000 (0:00:02.787) 0:00:36.890 ************
2025-05-25 04:09:54.770976 | orchestrator | ok: [testbed-node-0]
2025-05-25 04:09:54.770988 | orchestrator | ok: [testbed-node-1]
2025-05-25 04:09:54.770999 | orchestrator | ok: [testbed-node-2]
2025-05-25 04:09:54.771010 | orchestrator | ok: [testbed-node-3]
2025-05-25 04:09:54.771020 | orchestrator | ok: [testbed-node-4]
2025-05-25 04:09:54.771031 | orchestrator | ok: [testbed-node-5]
2025-05-25 04:09:54.771042 | orchestrator |
2025-05-25 04:09:54.771053 | orchestrator | TASK [Setting sysctl values] ***************************************************
2025-05-25 04:09:54.771075 | orchestrator | Sunday 25 May 2025 04:06:18 +0000 (0:00:01.097) 0:00:37.988 ************
2025-05-25 04:09:54.771086 | orchestrator | skipping: [testbed-node-0]
2025-05-25 04:09:54.771097 | orchestrator | skipping: [testbed-node-1]
2025-05-25 04:09:54.771107 | orchestrator | skipping: [testbed-node-2]
2025-05-25 04:09:54.771118 | orchestrator | skipping: [testbed-node-3]
2025-05-25 04:09:54.771129 | orchestrator | skipping: [testbed-node-4]
2025-05-25 04:09:54.771139 | orchestrator | skipping: [testbed-node-5]
2025-05-25 04:09:54.771150 | orchestrator |
2025-05-25 04:09:54.771161 | orchestrator | TASK [neutron : Ensuring config directories exist] *****************************
2025-05-25 04:09:54.771171
| orchestrator | Sunday 25 May 2025 04:06:21 +0000 (0:00:02.774) 0:00:40.763 ************ 2025-05-25 04:09:54.771186 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-05-25 04:09:54.771249 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-05-25 04:09:54.771270 | orchestrator | changed: 
[testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-05-25 04:09:54.771284 | orchestrator | changed: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-05-25 04:09:54.771303 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-05-25 04:09:54.771315 | orchestrator | changed: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-05-25 04:09:54.771326 | orchestrator | 2025-05-25 04:09:54.771337 | orchestrator | TASK [neutron : Check if extra ml2 plugins exists] ***************************** 2025-05-25 04:09:54.771349 | orchestrator | Sunday 25 May 2025 04:06:24 +0000 (0:00:03.213) 0:00:43.976 ************ 2025-05-25 04:09:54.771360 | orchestrator | [WARNING]: Skipped 2025-05-25 04:09:54.771371 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/neutron/plugins/' path 2025-05-25 04:09:54.771382 | orchestrator | due to this access issue: 2025-05-25 04:09:54.771394 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/neutron/plugins/' is not 2025-05-25 04:09:54.771404 | orchestrator | a directory 2025-05-25 04:09:54.771415 | orchestrator | ok: [testbed-node-0 -> 
localhost] 2025-05-25 04:09:54.771426 | orchestrator | 2025-05-25 04:09:54.771438 | orchestrator | TASK [neutron : include_tasks] ************************************************* 2025-05-25 04:09:54.771478 | orchestrator | Sunday 25 May 2025 04:06:25 +0000 (0:00:00.919) 0:00:44.895 ************ 2025-05-25 04:09:54.771492 | orchestrator | included: /ansible/roles/neutron/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-05-25 04:09:54.771504 | orchestrator | 2025-05-25 04:09:54.771515 | orchestrator | TASK [service-cert-copy : neutron | Copying over extra CA certificates] ******** 2025-05-25 04:09:54.771526 | orchestrator | Sunday 25 May 2025 04:06:26 +0000 (0:00:01.110) 0:00:46.006 ************ 2025-05-25 04:09:54.771543 | orchestrator | changed: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-05-25 04:09:54.771556 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-05-25 04:09:54.771574 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-05-25 04:09:54.771586 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-05-25 04:09:54.771654 | orchestrator | changed: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-05-25 04:09:54.771673 | orchestrator | changed: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-05-25 04:09:54.771697 | orchestrator | 2025-05-25 04:09:54.771716 | orchestrator | TASK 
[service-cert-copy : neutron | Copying over backend internal TLS certificate] *** 2025-05-25 04:09:54.771734 | orchestrator | Sunday 25 May 2025 04:06:31 +0000 (0:00:04.217) 0:00:50.223 ************ 2025-05-25 04:09:54.771752 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-05-25 04:09:54.771771 | orchestrator | skipping: [testbed-node-0] 2025-05-25 04:09:54.771790 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': 
True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-05-25 04:09:54.771810 | orchestrator | skipping: [testbed-node-1] 2025-05-25 04:09:54.771829 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-05-25 04:09:54.771902 | orchestrator | skipping: [testbed-node-2] 2025-05-25 04:09:54.771926 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-05-25 04:09:54.771970 
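Every container definition dumped in these loop items carries the same kolla-style `healthcheck` dict: string-valued second counts plus a `['CMD-SHELL', ...]` test. How kolla-ansible hands these to the container engine is internal to its modules; the sketch below only illustrates the dict's shape by rendering it as the equivalent `docker run` health flags:

```python
def healthcheck_to_docker_args(hc):
    """Render a kolla-style healthcheck dict (values are strings of seconds)
    as `docker run` health flags. A sketch of the dict layout seen in the
    log, not kolla-ansible's actual conversion code."""
    args = [
        f"--health-interval={hc['interval']}s",
        f"--health-retries={hc['retries']}",
        f"--health-start-period={hc['start_period']}s",
        f"--health-timeout={hc['timeout']}s",
    ]
    if hc["test"][0] == "CMD-SHELL":
        args.append(f"--health-cmd={hc['test'][1]}")
    return args

# The neutron_server healthcheck from the testbed-node-0 item:
hc = {
    "interval": "30", "retries": "3", "start_period": "5",
    "test": ["CMD-SHELL", "healthcheck_curl http://192.168.16.10:9696"],
    "timeout": "30",
}
args = healthcheck_to_docker_args(hc)
```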
| orchestrator | skipping: [testbed-node-4] 2025-05-25 04:09:54.771990 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-05-25 04:09:54.772011 | orchestrator | skipping: [testbed-node-3] 2025-05-25 04:09:54.772025 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-05-25 04:09:54.772036 | orchestrator | skipping: [testbed-node-5] 2025-05-25 04:09:54.772047 | orchestrator | 2025-05-25 04:09:54.772058 | orchestrator | TASK [service-cert-copy : neutron | Copying over backend internal TLS key] ***** 2025-05-25 04:09:54.772068 | orchestrator | Sunday 25 May 2025 04:06:33 +0000 
(0:00:02.660) 0:00:52.884 ************ 2025-05-25 04:09:54.772080 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-05-25 04:09:54.772091 | orchestrator | skipping: [testbed-node-2] 2025-05-25 04:09:54.772139 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-05-25 04:09:54.772153 | 
orchestrator | skipping: [testbed-node-0] 2025-05-25 04:09:54.772170 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-05-25 04:09:54.772195 | orchestrator | skipping: [testbed-node-3] 2025-05-25 04:09:54.772206 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-05-25 04:09:54.772218 | orchestrator | skipping: [testbed-node-1] 2025-05-25 04:09:54.772229 | orchestrator | skipping: [testbed-node-4] => (item={'key': 
'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-05-25 04:09:54.772240 | orchestrator | skipping: [testbed-node-4] 2025-05-25 04:09:54.772252 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-05-25 04:09:54.772263 | orchestrator | skipping: [testbed-node-5] 2025-05-25 04:09:54.772273 | orchestrator | 2025-05-25 04:09:54.772285 | orchestrator | TASK [neutron : Creating TLS backend PEM File] ********************************* 2025-05-25 04:09:54.772295 | orchestrator | Sunday 25 May 2025 04:06:36 +0000 (0:00:02.691) 0:00:55.575 ************ 2025-05-25 04:09:54.772306 | orchestrator | skipping: [testbed-node-3] 2025-05-25 04:09:54.772317 | 
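The metadata-agent definitions above use `healthcheck_port neutron-ovn-metadata-agent 6640` as their test command. Kolla's actual `healthcheck_port` script inspects the named process's open sockets; as a simplified stand-in, a plain TCP reachability probe looks like this:

```python
import socket

def port_reachable(host, port, timeout=1.0):
    """Return True if a TCP connection to host:port succeeds. A simplified
    stand-in for kolla's healthcheck_port script, which checks the named
    process's sockets rather than dialing the port from outside."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False
```

In the containers above the probed port is 6640, the OVN/OVSDB connection the agent must hold open.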
orchestrator | skipping: [testbed-node-1] 2025-05-25 04:09:54.772328 | orchestrator | skipping: [testbed-node-0] 2025-05-25 04:09:54.772338 | orchestrator | skipping: [testbed-node-2] 2025-05-25 04:09:54.772349 | orchestrator | skipping: [testbed-node-4] 2025-05-25 04:09:54.772360 | orchestrator | skipping: [testbed-node-5] 2025-05-25 04:09:54.772371 | orchestrator | 2025-05-25 04:09:54.772389 | orchestrator | TASK [neutron : Check if policies shall be overwritten] ************************ 2025-05-25 04:09:54.772406 | orchestrator | Sunday 25 May 2025 04:06:39 +0000 (0:00:02.721) 0:00:58.297 ************ 2025-05-25 04:09:54.772417 | orchestrator | skipping: [testbed-node-0] 2025-05-25 04:09:54.772428 | orchestrator | 2025-05-25 04:09:54.772439 | orchestrator | TASK [neutron : Set neutron policy file] *************************************** 2025-05-25 04:09:54.772450 | orchestrator | Sunday 25 May 2025 04:06:39 +0000 (0:00:00.106) 0:00:58.403 ************ 2025-05-25 04:09:54.772461 | orchestrator | skipping: [testbed-node-0] 2025-05-25 04:09:54.772472 | orchestrator | skipping: [testbed-node-1] 2025-05-25 04:09:54.772483 | orchestrator | skipping: [testbed-node-2] 2025-05-25 04:09:54.772493 | orchestrator | skipping: [testbed-node-3] 2025-05-25 04:09:54.772504 | orchestrator | skipping: [testbed-node-4] 2025-05-25 04:09:54.772515 | orchestrator | skipping: [testbed-node-5] 2025-05-25 04:09:54.772526 | orchestrator | 2025-05-25 04:09:54.772537 | orchestrator | TASK [neutron : Copying over existing policy file] ***************************** 2025-05-25 04:09:54.772548 | orchestrator | Sunday 25 May 2025 04:06:40 +0000 (0:00:00.740) 0:00:59.144 ************ 2025-05-25 04:09:54.772564 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': 
['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-05-25 04:09:54.772576 | orchestrator | skipping: [testbed-node-1] 2025-05-25 04:09:54.772588 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-05-25 04:09:54.772625 | orchestrator | skipping: [testbed-node-0] 2025-05-25 04:09:54.772637 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': 
['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-05-25 04:09:54.772657 | orchestrator | skipping: [testbed-node-2] 2025-05-25 04:09:54.772678 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-05-25 04:09:54.772690 | orchestrator | skipping: [testbed-node-3] 2025-05-25 04:09:54.772706 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-05-25 04:09:54.772717 | orchestrator | skipping: [testbed-node-4] 2025-05-25 04:09:54.772729 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-05-25 04:09:54.772740 | orchestrator | skipping: [testbed-node-5] 2025-05-25 04:09:54.772751 | orchestrator | 2025-05-25 04:09:54.772762 | orchestrator | TASK [neutron : Copying over config.json files for services] ******************* 2025-05-25 04:09:54.772773 | orchestrator | Sunday 25 May 2025 04:06:42 +0000 (0:00:02.768) 0:01:01.913 ************ 2025-05-25 04:09:54.772784 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-05-25 04:09:54.772796 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-05-25 04:09:54.772822 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-05-25 04:09:54.772839 | orchestrator | changed: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-05-25 04:09:54.772852 | orchestrator | changed: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-05-25 04:09:54.772864 | orchestrator | changed: [testbed-node-4] => (item={'key': 
'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-05-25 04:09:54.772875 | orchestrator | 2025-05-25 04:09:54.772886 | orchestrator | TASK [neutron : Copying over neutron.conf] ************************************* 2025-05-25 04:09:54.772897 | orchestrator | Sunday 25 May 2025 04:06:46 +0000 (0:00:03.217) 0:01:05.130 ************ 2025-05-25 04:09:54.772908 | orchestrator | changed: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-05-25 04:09:54.772933 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 
'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-05-25 04:09:54.772950 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-05-25 04:09:54.772963 | orchestrator | changed: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': 
['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-05-25 04:09:54.772975 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-05-25 04:09:54.772993 | orchestrator | changed: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-05-25 04:09:54.773004 | orchestrator | 2025-05-25 04:09:54.773015 | orchestrator | TASK [neutron : Copying over neutron_vpnaas.conf] ****************************** 2025-05-25 04:09:54.773026 | orchestrator | Sunday 25 May 2025 04:06:50 +0000 (0:00:04.708) 0:01:09.839 ************ 2025-05-25 04:09:54.773047 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-05-25 04:09:54.773064 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-05-25 04:09:54.773075 | orchestrator | skipping: [testbed-node-3] 2025-05-25 04:09:54.773087 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-05-25 04:09:54.773098 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-05-25 04:09:54.773116 | orchestrator | skipping: 
[testbed-node-4] 2025-05-25 04:09:54.773127 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-05-25 04:09:54.773139 | orchestrator | skipping: [testbed-node-5] 2025-05-25 04:09:54.773158 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-05-25 04:09:54.773170 | orchestrator | 2025-05-25 04:09:54.773181 | orchestrator | TASK [neutron : Copying over ssh key] ****************************************** 2025-05-25 04:09:54.773192 | 
orchestrator | Sunday 25 May 2025 04:06:54 +0000 (0:00:03.793) 0:01:13.633 ************ 2025-05-25 04:09:54.773208 | orchestrator | skipping: [testbed-node-3] 2025-05-25 04:09:54.773219 | orchestrator | skipping: [testbed-node-5] 2025-05-25 04:09:54.773230 | orchestrator | skipping: [testbed-node-4] 2025-05-25 04:09:54.773241 | orchestrator | changed: [testbed-node-1] 2025-05-25 04:09:54.773252 | orchestrator | changed: [testbed-node-2] 2025-05-25 04:09:54.773263 | orchestrator | changed: [testbed-node-0] 2025-05-25 04:09:54.773273 | orchestrator | 2025-05-25 04:09:54.773285 | orchestrator | TASK [neutron : Copying over ml2_conf.ini] ************************************* 2025-05-25 04:09:54.773295 | orchestrator | Sunday 25 May 2025 04:06:58 +0000 (0:00:03.824) 0:01:17.457 ************ 2025-05-25 04:09:54.773307 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-05-25 04:09:54.773318 | orchestrator | skipping: [testbed-node-4] 2025-05-25 04:09:54.773336 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': 
['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-05-25 04:09:54.773348 | orchestrator | skipping: [testbed-node-3] 2025-05-25 04:09:54.773359 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-05-25 04:09:54.773370 | orchestrator | skipping: [testbed-node-5] 2025-05-25 04:09:54.773389 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-05-25 04:09:54.773406 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-05-25 04:09:54.773418 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 
'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-05-25 04:09:54.773442 | orchestrator | 2025-05-25 04:09:54.773453 | orchestrator | TASK [neutron : Copying over linuxbridge_agent.ini] **************************** 2025-05-25 04:09:54.773464 | orchestrator | Sunday 25 May 2025 04:07:01 +0000 (0:00:03.282) 0:01:20.739 ************ 2025-05-25 04:09:54.773475 | orchestrator | skipping: [testbed-node-2] 2025-05-25 04:09:54.773486 | orchestrator | skipping: [testbed-node-1] 2025-05-25 04:09:54.773497 | orchestrator | skipping: [testbed-node-3] 2025-05-25 04:09:54.773508 | orchestrator | skipping: [testbed-node-0] 2025-05-25 04:09:54.773518 | orchestrator | skipping: [testbed-node-4] 2025-05-25 04:09:54.773529 | orchestrator | skipping: [testbed-node-5] 2025-05-25 04:09:54.773540 | orchestrator | 2025-05-25 04:09:54.773551 | orchestrator | TASK [neutron : Copying over openvswitch_agent.ini] **************************** 2025-05-25 04:09:54.773562 | orchestrator | Sunday 25 May 2025 04:07:04 +0000 (0:00:02.484) 0:01:23.224 ************ 2025-05-25 04:09:54.773573 | orchestrator | skipping: [testbed-node-0] 2025-05-25 04:09:54.773584 | orchestrator | skipping: [testbed-node-1] 2025-05-25 04:09:54.773613 | orchestrator | skipping: [testbed-node-2] 2025-05-25 04:09:54.773625 | orchestrator | skipping: [testbed-node-3] 2025-05-25 04:09:54.773636 | orchestrator | skipping: [testbed-node-4] 2025-05-25 04:09:54.773646 | orchestrator | skipping: [testbed-node-5] 2025-05-25 04:09:54.773657 | orchestrator | 2025-05-25 04:09:54.773668 | orchestrator | TASK [neutron : Copying over sriov_agent.ini] ********************************** 2025-05-25 04:09:54.773679 | orchestrator | Sunday 25 May 2025 04:07:06 +0000 (0:00:02.098) 0:01:25.322 ************ 2025-05-25 04:09:54.773690 | orchestrator | skipping: [testbed-node-1] 2025-05-25 04:09:54.773701 | 
orchestrator | skipping: [testbed-node-0] 2025-05-25 04:09:54.773711 | orchestrator | skipping: [testbed-node-2] 2025-05-25 04:09:54.773722 | orchestrator | skipping: [testbed-node-3] 2025-05-25 04:09:54.773733 | orchestrator | skipping: [testbed-node-4] 2025-05-25 04:09:54.773744 | orchestrator | skipping: [testbed-node-5] 2025-05-25 04:09:54.773755 | orchestrator | 2025-05-25 04:09:54.773765 | orchestrator | TASK [neutron : Copying over mlnx_agent.ini] *********************************** 2025-05-25 04:09:54.773776 | orchestrator | Sunday 25 May 2025 04:07:08 +0000 (0:00:01.777) 0:01:27.100 ************ 2025-05-25 04:09:54.773787 | orchestrator | skipping: [testbed-node-2] 2025-05-25 04:09:54.773798 | orchestrator | skipping: [testbed-node-0] 2025-05-25 04:09:54.773809 | orchestrator | skipping: [testbed-node-1] 2025-05-25 04:09:54.773819 | orchestrator | skipping: [testbed-node-3] 2025-05-25 04:09:54.773830 | orchestrator | skipping: [testbed-node-4] 2025-05-25 04:09:54.773841 | orchestrator | skipping: [testbed-node-5] 2025-05-25 04:09:54.773852 | orchestrator | 2025-05-25 04:09:54.773863 | orchestrator | TASK [neutron : Copying over eswitchd.conf] ************************************ 2025-05-25 04:09:54.773874 | orchestrator | Sunday 25 May 2025 04:07:10 +0000 (0:00:02.204) 0:01:29.304 ************ 2025-05-25 04:09:54.773885 | orchestrator | skipping: [testbed-node-0] 2025-05-25 04:09:54.773895 | orchestrator | skipping: [testbed-node-1] 2025-05-25 04:09:54.773906 | orchestrator | skipping: [testbed-node-2] 2025-05-25 04:09:54.773917 | orchestrator | skipping: [testbed-node-3] 2025-05-25 04:09:54.773928 | orchestrator | skipping: [testbed-node-4] 2025-05-25 04:09:54.773939 | orchestrator | skipping: [testbed-node-5] 2025-05-25 04:09:54.773950 | orchestrator | 2025-05-25 04:09:54.773966 | orchestrator | TASK [neutron : Copying over dhcp_agent.ini] *********************************** 2025-05-25 04:09:54.773978 | orchestrator | Sunday 25 May 2025 04:07:12 +0000 
(0:00:02.578) 0:01:31.883 ************ 2025-05-25 04:09:54.773989 | orchestrator | skipping: [testbed-node-1] 2025-05-25 04:09:54.774000 | orchestrator | skipping: [testbed-node-0] 2025-05-25 04:09:54.774011 | orchestrator | skipping: [testbed-node-2] 2025-05-25 04:09:54.774083 | orchestrator | skipping: [testbed-node-5] 2025-05-25 04:09:54.774095 | orchestrator | skipping: [testbed-node-4] 2025-05-25 04:09:54.774105 | orchestrator | skipping: [testbed-node-3] 2025-05-25 04:09:54.774116 | orchestrator | 2025-05-25 04:09:54.774127 | orchestrator | TASK [neutron : Copying over dnsmasq.conf] ************************************* 2025-05-25 04:09:54.774138 | orchestrator | Sunday 25 May 2025 04:07:15 +0000 (0:00:02.757) 0:01:34.641 ************ 2025-05-25 04:09:54.774149 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)  2025-05-25 04:09:54.774160 | orchestrator | skipping: [testbed-node-0] 2025-05-25 04:09:54.774171 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)  2025-05-25 04:09:54.774182 | orchestrator | skipping: [testbed-node-3] 2025-05-25 04:09:54.774198 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)  2025-05-25 04:09:54.774210 | orchestrator | skipping: [testbed-node-1] 2025-05-25 04:09:54.774221 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)  2025-05-25 04:09:54.774232 | orchestrator | skipping: [testbed-node-2] 2025-05-25 04:09:54.774243 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)  2025-05-25 04:09:54.774254 | orchestrator | skipping: [testbed-node-4] 2025-05-25 04:09:54.774265 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)  2025-05-25 04:09:54.774276 | orchestrator | skipping: [testbed-node-5] 2025-05-25 04:09:54.774286 | 
orchestrator | 2025-05-25 04:09:54.774297 | orchestrator | TASK [neutron : Copying over l3_agent.ini] ************************************* 2025-05-25 04:09:54.774308 | orchestrator | Sunday 25 May 2025 04:07:18 +0000 (0:00:02.704) 0:01:37.345 ************ 2025-05-25 04:09:54.774320 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-05-25 04:09:54.774331 | orchestrator | skipping: [testbed-node-1] 2025-05-25 04:09:54.774343 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 
'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-05-25 04:09:54.774354 | orchestrator | skipping: [testbed-node-0] 2025-05-25 04:09:54.774365 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-05-25 04:09:54.774392 | orchestrator | skipping: [testbed-node-4] 2025-05-25 04:09:54.774409 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': 
'9696', 'listen_port': '9696'}}}})  2025-05-25 04:09:54.774421 | orchestrator | skipping: [testbed-node-2] 2025-05-25 04:09:54.774432 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-05-25 04:09:54.774443 | orchestrator | skipping: [testbed-node-5] 2025-05-25 04:09:54.774455 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-05-25 04:09:54.774466 | orchestrator | skipping: [testbed-node-3] 2025-05-25 04:09:54.774477 | orchestrator | 2025-05-25 04:09:54.774488 | orchestrator | TASK [neutron : Copying over fwaas_driver.ini] ********************************* 2025-05-25 
04:09:54.774499 | orchestrator | Sunday 25 May 2025 04:07:21 +0000 (0:00:03.345) 0:01:40.691 ************ 2025-05-25 04:09:54.774510 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-05-25 04:09:54.774528 | orchestrator | skipping: [testbed-node-1] 2025-05-25 04:09:54.774549 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': 
'9696', 'listen_port': '9696'}}}})  2025-05-25 04:09:54.774561 | orchestrator | skipping: [testbed-node-0] 2025-05-25 04:09:54.774577 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-05-25 04:09:54.774589 | orchestrator | skipping: [testbed-node-3] 2025-05-25 04:09:54.774620 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-05-25 04:09:54.774631 | orchestrator | skipping: [testbed-node-2] 2025-05-25 04:09:54.774642 | orchestrator | 
skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-05-25 04:09:54.774654 | orchestrator | skipping: [testbed-node-4] 2025-05-25 04:09:54.774665 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-05-25 04:09:54.774684 | orchestrator | skipping: [testbed-node-5] 2025-05-25 04:09:54.774695 | orchestrator | 2025-05-25 04:09:54.774706 | orchestrator | TASK [neutron : Copying over metadata_agent.ini] ******************************* 2025-05-25 04:09:54.774717 | orchestrator | Sunday 25 May 2025 04:07:24 +0000 (0:00:03.091) 0:01:43.783 ************ 2025-05-25 04:09:54.774728 | orchestrator | skipping: 
[testbed-node-0]
2025-05-25 04:09:54.774739 | orchestrator | skipping: [testbed-node-4]
2025-05-25 04:09:54.774750 | orchestrator | skipping: [testbed-node-1]
2025-05-25 04:09:54.774760 | orchestrator | skipping: [testbed-node-2]
2025-05-25 04:09:54.774771 | orchestrator | skipping: [testbed-node-3]
2025-05-25 04:09:54.774788 | orchestrator | skipping: [testbed-node-5]
2025-05-25 04:09:54.774799 | orchestrator |
2025-05-25 04:09:54.774810 | orchestrator | TASK [neutron : Copying over neutron_ovn_metadata_agent.ini] *******************
2025-05-25 04:09:54.774821 | orchestrator | Sunday 25 May 2025 04:07:27 +0000 (0:00:02.330) 0:01:46.114 ************
2025-05-25 04:09:54.774832 | orchestrator | skipping: [testbed-node-0]
2025-05-25 04:09:54.774843 | orchestrator | skipping: [testbed-node-1]
2025-05-25 04:09:54.774854 | orchestrator | skipping: [testbed-node-2]
2025-05-25 04:09:54.774864 | orchestrator | changed: [testbed-node-3]
2025-05-25 04:09:54.774875 | orchestrator | changed: [testbed-node-4]
2025-05-25 04:09:54.774886 | orchestrator | changed: [testbed-node-5]
2025-05-25 04:09:54.774897 | orchestrator |
2025-05-25 04:09:54.774908 | orchestrator | TASK [neutron : Copying over neutron_ovn_vpn_agent.ini] ************************
2025-05-25 04:09:54.774919 | orchestrator | Sunday 25 May 2025 04:07:32 +0000 (0:00:05.504) 0:01:51.619 ************
2025-05-25 04:09:54.774929 | orchestrator | skipping: [testbed-node-1]
2025-05-25 04:09:54.774940 | orchestrator | skipping: [testbed-node-0]
2025-05-25 04:09:54.774951 | orchestrator | skipping: [testbed-node-2]
2025-05-25 04:09:54.774962 | orchestrator | skipping: [testbed-node-4]
2025-05-25 04:09:54.774972 | orchestrator | skipping: [testbed-node-3]
2025-05-25 04:09:54.774988 | orchestrator | skipping: [testbed-node-5]
2025-05-25 04:09:54.774999 | orchestrator |
2025-05-25 04:09:54.775010 | orchestrator | TASK [neutron : Copying over metering_agent.ini] *******************************
2025-05-25 04:09:54.775021 | orchestrator | Sunday 25 May 2025 04:07:35 +0000 (0:00:03.353) 0:01:54.972 ************
2025-05-25 04:09:54.775032 | orchestrator | skipping: [testbed-node-2]
2025-05-25 04:09:54.775043 | orchestrator | skipping: [testbed-node-1]
2025-05-25 04:09:54.775054 | orchestrator | skipping: [testbed-node-0]
2025-05-25 04:09:54.775064 | orchestrator | skipping: [testbed-node-3]
2025-05-25 04:09:54.775075 | orchestrator | skipping: [testbed-node-5]
2025-05-25 04:09:54.775086 | orchestrator | skipping: [testbed-node-4]
2025-05-25 04:09:54.775097 | orchestrator |
2025-05-25 04:09:54.775108 | orchestrator | TASK [neutron : Copying over ironic_neutron_agent.ini] *************************
2025-05-25 04:09:54.775119 | orchestrator | Sunday 25 May 2025 04:07:38 +0000 (0:00:02.446) 0:01:57.419 ************
2025-05-25 04:09:54.775130 | orchestrator | skipping: [testbed-node-0]
2025-05-25 04:09:54.775140 | orchestrator | skipping: [testbed-node-1]
2025-05-25 04:09:54.775151 | orchestrator | skipping: [testbed-node-2]
2025-05-25 04:09:54.775162 | orchestrator | skipping: [testbed-node-3]
2025-05-25 04:09:54.775172 | orchestrator | skipping: [testbed-node-4]
2025-05-25 04:09:54.775183 | orchestrator | skipping: [testbed-node-5]
2025-05-25 04:09:54.775201 | orchestrator |
2025-05-25 04:09:54.775212 | orchestrator | TASK [neutron : Copying over bgp_dragent.ini] **********************************
2025-05-25 04:09:54.775223 | orchestrator | Sunday 25 May 2025 04:07:41 +0000 (0:00:03.020) 0:02:00.439 ************
2025-05-25 04:09:54.775233 | orchestrator | skipping: [testbed-node-1]
2025-05-25 04:09:54.775244 | orchestrator | skipping: [testbed-node-0]
2025-05-25 04:09:54.775255 | orchestrator | skipping: [testbed-node-2]
2025-05-25 04:09:54.775265 | orchestrator | skipping: [testbed-node-5]
2025-05-25 04:09:54.775276 | orchestrator | skipping: [testbed-node-4]
2025-05-25 04:09:54.775287 | orchestrator | skipping: [testbed-node-3]
2025-05-25 04:09:54.775298 | orchestrator |
2025-05-25 04:09:54.775309 | orchestrator | TASK [neutron : Copying over ovn_agent.ini] ************************************
2025-05-25 04:09:54.775320 | orchestrator | Sunday 25 May 2025 04:07:43 +0000 (0:00:02.100) 0:02:02.539 ************
2025-05-25 04:09:54.775331 | orchestrator | skipping: [testbed-node-1]
2025-05-25 04:09:54.775341 | orchestrator | skipping: [testbed-node-0]
2025-05-25 04:09:54.775352 | orchestrator | skipping: [testbed-node-3]
2025-05-25 04:09:54.775363 | orchestrator | skipping: [testbed-node-2]
2025-05-25 04:09:54.775374 | orchestrator | skipping: [testbed-node-4]
2025-05-25 04:09:54.775384 | orchestrator | skipping: [testbed-node-5]
2025-05-25 04:09:54.775395 | orchestrator |
2025-05-25 04:09:54.775406 | orchestrator | TASK [neutron : Copying over nsx.ini] ******************************************
2025-05-25 04:09:54.775417 | orchestrator | Sunday 25 May 2025 04:07:46 +0000 (0:00:02.567) 0:02:05.107 ************
2025-05-25 04:09:54.775428 | orchestrator | skipping: [testbed-node-0]
2025-05-25 04:09:54.775438 | orchestrator | skipping: [testbed-node-1]
2025-05-25 04:09:54.775449 | orchestrator | skipping: [testbed-node-5]
2025-05-25 04:09:54.775460 | orchestrator | skipping: [testbed-node-4]
2025-05-25 04:09:54.775471 | orchestrator | skipping: [testbed-node-2]
2025-05-25 04:09:54.775481 | orchestrator | skipping: [testbed-node-3]
2025-05-25 04:09:54.775492 | orchestrator |
2025-05-25 04:09:54.775503 | orchestrator | TASK [neutron : Copy neutron-l3-agent-wrapper script] **************************
2025-05-25 04:09:54.775514 | orchestrator | Sunday 25 May 2025 04:07:49 +0000 (0:00:02.950) 0:02:08.058 ************
2025-05-25 04:09:54.775525 | orchestrator | skipping: [testbed-node-2]
2025-05-25 04:09:54.775535 | orchestrator | skipping: [testbed-node-0]
2025-05-25 04:09:54.775546 | orchestrator | skipping: [testbed-node-3]
2025-05-25 04:09:54.775557 | orchestrator | skipping: [testbed-node-1]
2025-05-25 04:09:54.775568 | orchestrator | skipping: [testbed-node-5]
2025-05-25 04:09:54.775578 | orchestrator | skipping: [testbed-node-4]
2025-05-25 04:09:54.775589 | orchestrator |
2025-05-25 04:09:54.775629 | orchestrator | TASK [neutron : Copying over extra ml2 plugins] ********************************
2025-05-25 04:09:54.775640 | orchestrator | Sunday 25 May 2025 04:07:52 +0000 (0:00:03.098) 0:02:11.156 ************
2025-05-25 04:09:54.775651 | orchestrator | skipping: [testbed-node-0]
2025-05-25 04:09:54.775662 | orchestrator | skipping: [testbed-node-2]
2025-05-25 04:09:54.775673 | orchestrator | skipping: [testbed-node-4]
2025-05-25 04:09:54.775684 | orchestrator | skipping: [testbed-node-1]
2025-05-25 04:09:54.775694 | orchestrator | skipping: [testbed-node-5]
2025-05-25 04:09:54.775705 | orchestrator | skipping: [testbed-node-3]
2025-05-25 04:09:54.775716 | orchestrator |
2025-05-25 04:09:54.775727 | orchestrator | TASK [neutron : Copying over neutron-tls-proxy.cfg] ****************************
2025-05-25 04:09:54.775738 | orchestrator | Sunday 25 May 2025 04:07:54 +0000 (0:00:02.407) 0:02:13.564 ************
2025-05-25 04:09:54.775749 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)
2025-05-25 04:09:54.775761 | orchestrator | skipping: [testbed-node-1]
2025-05-25 04:09:54.775771 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)
2025-05-25 04:09:54.775783 | orchestrator | skipping: [testbed-node-0]
2025-05-25 04:09:54.775800 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)
2025-05-25 04:09:54.775811 | orchestrator | skipping: [testbed-node-3]
2025-05-25 04:09:54.775828 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)
2025-05-25 04:09:54.775839 | orchestrator | skipping: [testbed-node-4]
2025-05-25 04:09:54.775850 | orchestrator | skipping:
[testbed-node-2] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)  2025-05-25 04:09:54.775861 | orchestrator | skipping: [testbed-node-2] 2025-05-25 04:09:54.775872 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)  2025-05-25 04:09:54.775883 | orchestrator | skipping: [testbed-node-5] 2025-05-25 04:09:54.775893 | orchestrator | 2025-05-25 04:09:54.775904 | orchestrator | TASK [neutron : Copying over neutron_taas.conf] ******************************** 2025-05-25 04:09:54.775915 | orchestrator | Sunday 25 May 2025 04:07:58 +0000 (0:00:03.529) 0:02:17.093 ************ 2025-05-25 04:09:54.775932 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-05-25 04:09:54.775944 | orchestrator | skipping: [testbed-node-1] 2025-05-25 04:09:54.775956 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': 
['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-05-25 04:09:54.775967 | orchestrator | skipping: [testbed-node-2] 2025-05-25 04:09:54.775978 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-05-25 04:09:54.775989 | orchestrator | skipping: [testbed-node-0] 2025-05-25 04:09:54.776006 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 
'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-05-25 04:09:54.776024 | orchestrator | skipping: [testbed-node-3] 2025-05-25 04:09:54.776040 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-05-25 04:09:54.776051 | orchestrator | skipping: [testbed-node-4] 2025-05-25 04:09:54.776062 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-05-25 04:09:54.776074 | orchestrator | skipping: [testbed-node-5] 2025-05-25 04:09:54.776085 | orchestrator | 2025-05-25 04:09:54.776096 | orchestrator | TASK [neutron : Check neutron containers] ************************************** 2025-05-25 04:09:54.776108 | orchestrator | Sunday 25 May 2025 04:08:00 +0000 (0:00:02.645) 0:02:19.739 ************ 2025-05-25 04:09:54.776128 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-05-25 04:09:54.776148 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-05-25 04:09:54.776189 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-05-25 04:09:54.776222 | orchestrator | changed: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-05-25 04:09:54.776243 | orchestrator | changed: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-05-25 04:09:54.776255 | orchestrator | changed: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-05-25 04:09:54.776267 | orchestrator | 2025-05-25 04:09:54.776278 | orchestrator | TASK [neutron : include_tasks] ************************************************* 2025-05-25 04:09:54.776289 | orchestrator | Sunday 25 May 2025 04:08:04 +0000 (0:00:03.812) 0:02:23.551 ************ 2025-05-25 
04:09:54.776300 | orchestrator | skipping: [testbed-node-0] 2025-05-25 04:09:54.776311 | orchestrator | skipping: [testbed-node-1] 2025-05-25 04:09:54.776321 | orchestrator | skipping: [testbed-node-2] 2025-05-25 04:09:54.776332 | orchestrator | skipping: [testbed-node-3] 2025-05-25 04:09:54.776343 | orchestrator | skipping: [testbed-node-4] 2025-05-25 04:09:54.776361 | orchestrator | skipping: [testbed-node-5] 2025-05-25 04:09:54.776372 | orchestrator | 2025-05-25 04:09:54.776383 | orchestrator | TASK [neutron : Creating Neutron database] ************************************* 2025-05-25 04:09:54.776394 | orchestrator | Sunday 25 May 2025 04:08:05 +0000 (0:00:00.513) 0:02:24.064 ************ 2025-05-25 04:09:54.776405 | orchestrator | changed: [testbed-node-0] 2025-05-25 04:09:54.776415 | orchestrator | 2025-05-25 04:09:54.776426 | orchestrator | TASK [neutron : Creating Neutron database user and setting permissions] ******** 2025-05-25 04:09:54.776437 | orchestrator | Sunday 25 May 2025 04:08:07 +0000 (0:00:02.228) 0:02:26.293 ************ 2025-05-25 04:09:54.776448 | orchestrator | changed: [testbed-node-0] 2025-05-25 04:09:54.776459 | orchestrator | 2025-05-25 04:09:54.776470 | orchestrator | TASK [neutron : Running Neutron bootstrap container] *************************** 2025-05-25 04:09:54.776481 | orchestrator | Sunday 25 May 2025 04:08:09 +0000 (0:00:02.066) 0:02:28.360 ************ 2025-05-25 04:09:54.776491 | orchestrator | changed: [testbed-node-0] 2025-05-25 04:09:54.776502 | orchestrator | 2025-05-25 04:09:54.776513 | orchestrator | TASK [neutron : Flush Handlers] ************************************************ 2025-05-25 04:09:54.776524 | orchestrator | Sunday 25 May 2025 04:08:52 +0000 (0:00:42.835) 0:03:11.195 ************ 2025-05-25 04:09:54.776535 | orchestrator | 2025-05-25 04:09:54.776545 | orchestrator | TASK [neutron : Flush Handlers] ************************************************ 2025-05-25 04:09:54.776556 | orchestrator | Sunday 25 May 
2025 04:08:52 +0000 (0:00:00.065) 0:03:11.261 ************ 2025-05-25 04:09:54.776567 | orchestrator | 2025-05-25 04:09:54.776578 | orchestrator | TASK [neutron : Flush Handlers] ************************************************ 2025-05-25 04:09:54.776644 | orchestrator | Sunday 25 May 2025 04:08:52 +0000 (0:00:00.253) 0:03:11.514 ************ 2025-05-25 04:09:54.776657 | orchestrator | 2025-05-25 04:09:54.776669 | orchestrator | TASK [neutron : Flush Handlers] ************************************************ 2025-05-25 04:09:54.776680 | orchestrator | Sunday 25 May 2025 04:08:52 +0000 (0:00:00.073) 0:03:11.588 ************ 2025-05-25 04:09:54.776690 | orchestrator | 2025-05-25 04:09:54.776701 | orchestrator | TASK [neutron : Flush Handlers] ************************************************ 2025-05-25 04:09:54.776712 | orchestrator | Sunday 25 May 2025 04:08:52 +0000 (0:00:00.065) 0:03:11.654 ************ 2025-05-25 04:09:54.776723 | orchestrator | 2025-05-25 04:09:54.776734 | orchestrator | TASK [neutron : Flush Handlers] ************************************************ 2025-05-25 04:09:54.776744 | orchestrator | Sunday 25 May 2025 04:08:52 +0000 (0:00:00.070) 0:03:11.725 ************ 2025-05-25 04:09:54.776755 | orchestrator | 2025-05-25 04:09:54.776766 | orchestrator | RUNNING HANDLER [neutron : Restart neutron-server container] ******************* 2025-05-25 04:09:54.776777 | orchestrator | Sunday 25 May 2025 04:08:52 +0000 (0:00:00.073) 0:03:11.798 ************ 2025-05-25 04:09:54.776790 | orchestrator | changed: [testbed-node-0] 2025-05-25 04:09:54.776809 | orchestrator | changed: [testbed-node-1] 2025-05-25 04:09:54.776835 | orchestrator | changed: [testbed-node-2] 2025-05-25 04:09:54.776852 | orchestrator | 2025-05-25 04:09:54.776868 | orchestrator | RUNNING HANDLER [neutron : Restart neutron-ovn-metadata-agent container] ******* 2025-05-25 04:09:54.776886 | orchestrator | Sunday 25 May 2025 04:09:21 +0000 (0:00:29.145) 0:03:40.944 ************ 2025-05-25 
04:09:54.776897 | orchestrator | changed: [testbed-node-5] 2025-05-25 04:09:54.776907 | orchestrator | changed: [testbed-node-3] 2025-05-25 04:09:54.776916 | orchestrator | changed: [testbed-node-4] 2025-05-25 04:09:54.776926 | orchestrator | 2025-05-25 04:09:54.776935 | orchestrator | PLAY RECAP ********************************************************************* 2025-05-25 04:09:54.776946 | orchestrator | testbed-node-0 : ok=27  changed=16  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0 2025-05-25 04:09:54.776957 | orchestrator | testbed-node-1 : ok=17  changed=9  unreachable=0 failed=0 skipped=31  rescued=0 ignored=0 2025-05-25 04:09:54.776967 | orchestrator | testbed-node-2 : ok=17  changed=9  unreachable=0 failed=0 skipped=31  rescued=0 ignored=0 2025-05-25 04:09:54.776984 | orchestrator | testbed-node-3 : ok=15  changed=7  unreachable=0 failed=0 skipped=33  rescued=0 ignored=0 2025-05-25 04:09:54.776994 | orchestrator | testbed-node-4 : ok=15  changed=7  unreachable=0 failed=0 skipped=33  rescued=0 ignored=0 2025-05-25 04:09:54.777003 | orchestrator | testbed-node-5 : ok=15  changed=7  unreachable=0 failed=0 skipped=33  rescued=0 ignored=0 2025-05-25 04:09:54.777013 | orchestrator | 2025-05-25 04:09:54.777022 | orchestrator | 2025-05-25 04:09:54.777032 | orchestrator | TASKS RECAP ******************************************************************** 2025-05-25 04:09:54.777042 | orchestrator | Sunday 25 May 2025 04:09:52 +0000 (0:00:30.850) 0:04:11.794 ************ 2025-05-25 04:09:54.777051 | orchestrator | =============================================================================== 2025-05-25 04:09:54.777061 | orchestrator | neutron : Running Neutron bootstrap container -------------------------- 42.84s 2025-05-25 04:09:54.777070 | orchestrator | neutron : Restart neutron-ovn-metadata-agent container ----------------- 30.85s 2025-05-25 04:09:54.777080 | orchestrator | neutron : Restart neutron-server container ----------------------------- 29.15s 
2025-05-25 04:09:54.777093 | orchestrator | service-ks-register : neutron | Granting user roles --------------------- 7.45s
2025-05-25 04:09:54.777109 | orchestrator | service-ks-register : neutron | Creating endpoints ---------------------- 6.03s
2025-05-25 04:09:54.777126 | orchestrator | neutron : Copying over neutron_ovn_metadata_agent.ini ------------------- 5.51s
2025-05-25 04:09:54.777140 | orchestrator | neutron : Copying over neutron.conf ------------------------------------- 4.71s
2025-05-25 04:09:54.777150 | orchestrator | service-cert-copy : neutron | Copying over extra CA certificates -------- 4.22s
2025-05-25 04:09:54.777159 | orchestrator | neutron : Copying over ssh key ------------------------------------------ 3.82s
2025-05-25 04:09:54.777169 | orchestrator | neutron : Check neutron containers -------------------------------------- 3.81s
2025-05-25 04:09:54.777178 | orchestrator | neutron : Copying over neutron_vpnaas.conf ------------------------------ 3.79s
2025-05-25 04:09:54.777188 | orchestrator | service-ks-register : neutron | Creating users -------------------------- 3.67s
2025-05-25 04:09:54.777197 | orchestrator | neutron : Copying over neutron-tls-proxy.cfg ---------------------------- 3.53s
2025-05-25 04:09:54.777207 | orchestrator | neutron : Copying over neutron_ovn_vpn_agent.ini ------------------------ 3.35s
2025-05-25 04:09:54.777216 | orchestrator | neutron : Copying over l3_agent.ini ------------------------------------- 3.35s
2025-05-25 04:09:54.777226 | orchestrator | neutron : Copying over ml2_conf.ini ------------------------------------- 3.28s
2025-05-25 04:09:54.777235 | orchestrator | neutron : Copying over config.json files for services ------------------- 3.22s
2025-05-25 04:09:54.777244 | orchestrator | neutron : Ensuring config directories exist ----------------------------- 3.21s
2025-05-25 04:09:54.777254 | orchestrator | service-ks-register : neutron | Creating roles -------------------------- 3.18s
2025-05-25 04:09:54.777263 | orchestrator | service-ks-register : neutron | Creating services ----------------------- 3.17s
2025-05-25 04:09:54.777279 | orchestrator | 2025-05-25 04:09:54 | INFO  | Task 7bb98bfa-acf7-43e0-a6a2-e5632221fbeb is in state STARTED
2025-05-25 04:09:54.777289 | orchestrator | 2025-05-25 04:09:54 | INFO  | Task 6e378feb-528a-412a-95cd-f98cd2c708a8 is in state STARTED
2025-05-25 04:09:54.777299 | orchestrator | 2025-05-25 04:09:54 | INFO  | Task 54113aef-c85d-4bae-9b37-5c6f7b29248e is in state STARTED
2025-05-25 04:09:54.777308 | orchestrator | 2025-05-25 04:09:54 | INFO  | Wait 1 second(s) until the next check
2025-05-25 04:09:57.838850 | orchestrator | 2025-05-25 04:09:57 | INFO  | Task e271a777-9167-4741-94de-8f1eee63744d is in state STARTED
2025-05-25 04:09:57.838960 | orchestrator | 2025-05-25 04:09:57 | INFO  | Task 7bb98bfa-acf7-43e0-a6a2-e5632221fbeb is in state STARTED
2025-05-25 04:09:57.839719 | orchestrator | 2025-05-25 04:09:57 | INFO  | Task 6e378feb-528a-412a-95cd-f98cd2c708a8 is in state STARTED
2025-05-25 04:09:57.842453 | orchestrator | 2025-05-25 04:09:57 | INFO  | Task 54113aef-c85d-4bae-9b37-5c6f7b29248e is in state STARTED
2025-05-25 04:09:57.842490 | orchestrator | 2025-05-25 04:09:57 | INFO  | Wait 1 second(s) until the next check
2025-05-25 04:10:00.892805 | orchestrator | 2025-05-25 04:10:00 | INFO  | Task e271a777-9167-4741-94de-8f1eee63744d is in state STARTED
2025-05-25 04:10:00.892903 | orchestrator | 2025-05-25 04:10:00 | INFO  | Task 9aa85e1d-fb60-467e-9cfc-45d5d1e0ba87 is in state STARTED
2025-05-25 04:10:00.895169 | orchestrator | 2025-05-25 04:10:00 | INFO  | Task 7bb98bfa-acf7-43e0-a6a2-e5632221fbeb is in state SUCCESS
2025-05-25 04:10:00.897626 | orchestrator | 
2025-05-25 04:10:00.897826 | orchestrator | 
2025-05-25 04:10:00.897846 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2025-05-25 04:10:00.897857 | orchestrator | 
2025-05-25 04:10:00.897867 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2025-05-25 04:10:00.898280 | orchestrator | Sunday 25 May 2025 04:06:57 +0000 (0:00:00.239) 0:00:00.239 ************
2025-05-25 04:10:00.898301 | orchestrator | ok: [testbed-node-0]
2025-05-25 04:10:00.898312 | orchestrator | ok: [testbed-node-1]
2025-05-25 04:10:00.898322 | orchestrator | ok: [testbed-node-2]
2025-05-25 04:10:00.899045 | orchestrator | 
2025-05-25 04:10:00.899087 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2025-05-25 04:10:00.899098 | orchestrator | Sunday 25 May 2025 04:06:58 +0000 (0:00:00.254) 0:00:00.494 ************
2025-05-25 04:10:00.899109 | orchestrator | ok: [testbed-node-0] => (item=enable_designate_True)
2025-05-25 04:10:00.899119 | orchestrator | ok: [testbed-node-1] => (item=enable_designate_True)
2025-05-25 04:10:00.899129 | orchestrator | ok: [testbed-node-2] => (item=enable_designate_True)
2025-05-25 04:10:00.899138 | orchestrator | 
2025-05-25 04:10:00.899148 | orchestrator | PLAY [Apply role designate] ****************************************************
2025-05-25 04:10:00.899158 | orchestrator | 
2025-05-25 04:10:00.899168 | orchestrator | TASK [designate : include_tasks] ***********************************************
2025-05-25 04:10:00.899178 | orchestrator | Sunday 25 May 2025 04:06:58 +0000 (0:00:00.341) 0:00:00.835 ************
2025-05-25 04:10:00.899188 | orchestrator | included: /ansible/roles/designate/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-05-25 04:10:00.899198 | orchestrator | 
2025-05-25 04:10:00.899208 | orchestrator | TASK [service-ks-register : designate | Creating services] *********************
2025-05-25 04:10:00.899218 | orchestrator | Sunday 25 May 2025 04:06:59 +0000 (0:00:00.501) 0:00:01.337 ************
2025-05-25 04:10:00.899228 | orchestrator | changed: [testbed-node-0] => (item=designate (dns))
2025-05-25 04:10:00.899237 | orchestrator | 
2025-05-25 04:10:00.899247 | orchestrator | TASK [service-ks-register : designate | Creating endpoints] ********************
2025-05-25 04:10:00.899256 | orchestrator | Sunday 25 May 2025 04:07:02 +0000 (0:00:03.318) 0:00:04.655 ************
2025-05-25 04:10:00.899266 | orchestrator | changed: [testbed-node-0] => (item=designate -> https://api-int.testbed.osism.xyz:9001 -> internal)
2025-05-25 04:10:00.899276 | orchestrator | changed: [testbed-node-0] => (item=designate -> https://api.testbed.osism.xyz:9001 -> public)
2025-05-25 04:10:00.899286 | orchestrator | 
2025-05-25 04:10:00.899296 | orchestrator | TASK [service-ks-register : designate | Creating projects] *********************
2025-05-25 04:10:00.899305 | orchestrator | Sunday 25 May 2025 04:07:08 +0000 (0:00:06.216) 0:00:10.872 ************
2025-05-25 04:10:00.899315 | orchestrator | ok: [testbed-node-0] => (item=service)
2025-05-25 04:10:00.899325 | orchestrator | 
2025-05-25 04:10:00.899336 | orchestrator | TASK [service-ks-register : designate | Creating users] ************************
2025-05-25 04:10:00.899352 | orchestrator | Sunday 25 May 2025 04:07:11 +0000 (0:00:03.337) 0:00:14.209 ************
2025-05-25 04:10:00.899378 | orchestrator | [WARNING]: Module did not set no_log for update_password
2025-05-25 04:10:00.899396 | orchestrator | changed: [testbed-node-0] => (item=designate -> service)
2025-05-25 04:10:00.899443 | orchestrator | 
2025-05-25 04:10:00.899461 | orchestrator | TASK [service-ks-register : designate | Creating roles] ************************
2025-05-25 04:10:00.899478 | orchestrator | Sunday 25 May 2025 04:07:15 +0000 (0:00:03.924) 0:00:18.134 ************
2025-05-25 04:10:00.899493 | orchestrator | ok: [testbed-node-0] => (item=admin)
2025-05-25 04:10:00.899510 | orchestrator | 
2025-05-25 04:10:00.899533 | orchestrator | TASK [service-ks-register : designate | Granting user roles] *******************
2025-05-25 04:10:00.899550 | orchestrator | Sunday 25 May 2025 04:07:19 +0000 (0:00:03.884) 0:00:22.018 ************
2025-05-25 04:10:00.899565 | orchestrator | changed: [testbed-node-0] => (item=designate -> service -> admin)
2025-05-25 04:10:00.899581 | orchestrator | 
2025-05-25 04:10:00.899680 | orchestrator | TASK [designate : Ensuring config directories exist] ***************************
2025-05-25 04:10:00.899693 | orchestrator | Sunday 25 May 2025 04:07:24 +0000 (0:00:04.452) 0:00:26.471 ************
2025-05-25 04:10:00.899722 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2025-05-25 04:10:00.899792 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2025-05-25 04:10:00.899806 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2025-05-25 04:10:00.899819 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2025-05-25 04:10:00.899846 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2025-05-25 04:10:00.899859 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2025-05-25 04:10:00.899888 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2025-05-25 04:10:00.899927 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2025-05-25 04:10:00.899941 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2025-05-25 04:10:00.899952 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2025-05-25 04:10:00.899965 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2025-05-25 04:10:00.899984 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2025-05-25 04:10:00.899995 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2025-05-25 04:10:00.900010 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2025-05-25 04:10:00.900043 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2025-05-25 04:10:00.900055 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2025-05-25 04:10:00.900065 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2025-05-25 04:10:00.900081 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2025-05-25 04:10:00.900091 | orchestrator | 
2025-05-25 04:10:00.900101 | orchestrator | TASK [designate : Check if policies shall be overwritten] **********************
2025-05-25 04:10:00.900111 | orchestrator | Sunday 25 May 2025 04:07:27 +0000 (0:00:02.985) 0:00:29.456 ************
2025-05-25 04:10:00.900121 | orchestrator | skipping: [testbed-node-0]
2025-05-25 04:10:00.900131 | orchestrator | 
2025-05-25 04:10:00.900141 | orchestrator | TASK [designate : Set designate policy file] ***********************************
2025-05-25 04:10:00.900230 | orchestrator | Sunday 25 May 2025 04:07:27 +0000 (0:00:00.212) 0:00:29.669 ************
2025-05-25 04:10:00.900242 | orchestrator | skipping: [testbed-node-0]
2025-05-25 04:10:00.900252 | orchestrator | skipping: [testbed-node-1]
2025-05-25 04:10:00.900262 | orchestrator | skipping: [testbed-node-2]
2025-05-25 04:10:00.900272 | orchestrator | 
2025-05-25 04:10:00.900281 | orchestrator | TASK [designate : include_tasks] ***********************************************
2025-05-25 04:10:00.900290 | orchestrator | Sunday 25 May 2025 04:07:28 +0000 (0:00:00.679) 0:00:30.348 ************
2025-05-25 04:10:00.900300 | orchestrator | included: /ansible/roles/designate/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-05-25 04:10:00.900310 | orchestrator | 
2025-05-25 04:10:00.900319 | orchestrator | TASK [service-cert-copy : designate | Copying over extra CA certificates] ******
2025-05-25 04:10:00.900329 | orchestrator | Sunday 25 May 2025 04:07:30 +0000 (0:00:01.976) 0:00:32.325 ************
2025-05-25 04:10:00.900344 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2025-05-25 04:10:00.900386 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2025-05-25 04:10:00.900398 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2025-05-25 04:10:00.900419 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2025-05-25 04:10:00.900437 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2025-05-25 04:10:00.900454 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2025-05-25 04:10:00.900486 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2025-05-25 04:10:00.900544 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2025-05-25 04:10:00.900565 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2025-05-25 04:10:00.900576 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2025-05-25 04:10:00.900586 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2025-05-25 04:10:00.900626 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2025-05-25 04:10:00.900637 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2025-05-25 04:10:00.900682 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2025-05-25 04:10:00.900694 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2025-05-25 04:10:00.900711 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2025-05-25 04:10:00.900721 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2025-05-25 04:10:00.900731 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2025-05-25 04:10:00.900741 | orchestrator | 
2025-05-25 04:10:00.900750 | orchestrator | TASK [service-cert-copy : designate | Copying over backend internal TLS certificate] ***
2025-05-25 04:10:00.900760 | orchestrator | Sunday 25 May 2025 04:07:36 +0000 (0:00:06.736) 0:00:39.062 ************
2025-05-25 04:10:00.900770 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2025-05-25 04:10:00.900785 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2025-05-25 04:10:00.900820 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2025-05-25 04:10:00.900839 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2025-05-25 04:10:00.900849 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2025-05-25 04:10:00.900859 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2025-05-25 04:10:00.900869 | orchestrator | skipping: [testbed-node-0]
2025-05-25 04:10:00.900879 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2025-05-25 04:10:00.900895 | orchestrator
| skipping: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-05-25 04:10:00.900931 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-05-25 04:10:00.900950 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-05-25 04:10:00.900960 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 
'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-05-25 04:10:00.900970 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-05-25 04:10:00.900980 | orchestrator | skipping: [testbed-node-1] 2025-05-25 04:10:00.900990 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 
'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-05-25 04:10:00.901004 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-05-25 04:10:00.901041 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-05-25 04:10:00.901060 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-05-25 04:10:00.901070 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-05-25 04:10:00.901080 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-05-25 04:10:00.901090 | orchestrator | skipping: [testbed-node-2] 2025-05-25 04:10:00.901100 | orchestrator | 2025-05-25 04:10:00.901110 | orchestrator | TASK [service-cert-copy : designate | Copying over backend internal TLS key] *** 2025-05-25 04:10:00.901120 | orchestrator | Sunday 25 May 2025 04:07:38 +0000 (0:00:01.585) 0:00:40.648 ************ 2025-05-25 04:10:00.901130 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': 
['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-05-25 04:10:00.901145 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-05-25 04:10:00.901186 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-05-25 04:10:00.901198 | orchestrator | 
skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-05-25 04:10:00.901208 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-05-25 04:10:00.901218 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-05-25 04:10:00.901228 | orchestrator | skipping: [testbed-node-0] 2025-05-25 04:10:00.901238 | orchestrator | skipping: [testbed-node-1] => (item={'key': 
'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-05-25 04:10:00.901248 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-05-25 04:10:00.901294 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': 
'30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-05-25 04:10:00.901306 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-05-25 04:10:00.901316 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-05-25 04:10:00.901327 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': 
'30'}}})  2025-05-25 04:10:00.901337 | orchestrator | skipping: [testbed-node-1] 2025-05-25 04:10:00.901347 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-05-25 04:10:00.901357 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-05-25 04:10:00.901381 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': 
['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-05-25 04:10:00.901418 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-05-25 04:10:00.901439 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-05-25 04:10:00.901458 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-05-25 04:10:00.901475 | orchestrator | skipping: [testbed-node-2] 2025-05-25 04:10:00.901493 | orchestrator | 2025-05-25 04:10:00.901512 | orchestrator | TASK [designate : Copying over config.json files for services] ***************** 2025-05-25 04:10:00.901529 | orchestrator | Sunday 25 May 2025 04:07:40 +0000 (0:00:01.932) 0:00:42.580 ************ 2025-05-25 04:10:00.901543 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-05-25 04:10:00.901559 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': 
'3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-05-25 04:10:00.901634 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-05-25 04:10:00.901649 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-05-25 
04:10:00.901659 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-05-25 04:10:00.901669 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-05-25 04:10:00.901679 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-05-25 
04:10:00.901695 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-05-25 04:10:00.901735 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-05-25 04:10:00.901747 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-05-25 04:10:00.901757 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 
'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-05-25 04:10:00.901767 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-05-25 04:10:00.901777 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-05-25 04:10:00.901787 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 
'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-05-25 04:10:00.901803 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-05-25 04:10:00.901843 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-05-25 04:10:00.901855 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-05-25 04:10:00.901865 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-05-25 04:10:00.901875 | orchestrator | 2025-05-25 04:10:00.901885 | orchestrator | TASK [designate : Copying over designate.conf] ********************************* 2025-05-25 04:10:00.901895 | orchestrator | Sunday 25 May 2025 04:07:46 +0000 (0:00:06.324) 0:00:48.905 ************ 2025-05-25 04:10:00.901904 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-05-25 04:10:00.901915 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-05-25 04:10:00.901936 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-05-25 04:10:00.901951 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': 
{'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-05-25 04:10:00.901961 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-05-25 04:10:00.901972 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-05-25 04:10:00.901982 | orchestrator | changed: 
[testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-05-25 04:10:00.901998 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-05-25 04:10:00.902013 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-05-25 04:10:00.902068 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 
'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-05-25 04:10:00.902079 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-05-25 04:10:00.902089 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-05-25 04:10:00.902099 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': 
['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-05-25 04:10:00.902109 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-05-25 04:10:00.902125 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-05-25 04:10:00.902140 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-05-25 04:10:00.902157 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-05-25 04:10:00.902168 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-05-25 04:10:00.902178 | orchestrator | 2025-05-25 04:10:00.902188 | orchestrator | TASK [designate : Copying over pools.yaml] ************************************* 2025-05-25 04:10:00.902197 | orchestrator | Sunday 25 May 2025 04:08:05 +0000 (0:00:18.571) 0:01:07.477 ************ 2025-05-25 04:10:00.902207 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/designate/templates/pools.yaml.j2) 2025-05-25 04:10:00.902217 | orchestrator | changed: [testbed-node-2] => 
(item=/ansible/roles/designate/templates/pools.yaml.j2) 2025-05-25 04:10:00.902227 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/designate/templates/pools.yaml.j2) 2025-05-25 04:10:00.902236 | orchestrator | 2025-05-25 04:10:00.902246 | orchestrator | TASK [designate : Copying over named.conf] ************************************* 2025-05-25 04:10:00.902256 | orchestrator | Sunday 25 May 2025 04:08:09 +0000 (0:00:04.049) 0:01:11.527 ************ 2025-05-25 04:10:00.902265 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/designate/templates/named.conf.j2) 2025-05-25 04:10:00.902275 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/designate/templates/named.conf.j2) 2025-05-25 04:10:00.902290 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/designate/templates/named.conf.j2) 2025-05-25 04:10:00.902300 | orchestrator | 2025-05-25 04:10:00.902309 | orchestrator | TASK [designate : Copying over rndc.conf] ************************************** 2025-05-25 04:10:00.902319 | orchestrator | Sunday 25 May 2025 04:08:11 +0000 (0:00:02.362) 0:01:13.889 ************ 2025-05-25 04:10:00.902329 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 
'port': '9001', 'listen_port': '9001'}}}})  2025-05-25 04:10:00.902344 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-05-25 04:10:00.902386 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-05-25 04:10:00.902410 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 
'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-05-25 04:10:00.902428 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-05-25 04:10:00.902461 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-05-25 04:10:00.902478 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 
'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-05-25 04:10:00.902496 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-05-25 04:10:00.902519 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-05-25 04:10:00.902546 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 
'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-05-25 04:10:00.902565 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-05-25 04:10:00.902662 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-05-25 04:10:00.902688 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': 
['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-05-25 04:10:00.902706 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-05-25 04:10:00.902724 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-05-25 04:10:00.902750 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-05-25 04:10:00.902769 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-05-25 04:10:00.902780 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-05-25 04:10:00.902799 | orchestrator | 2025-05-25 04:10:00.902809 | orchestrator | TASK [designate : Copying over rndc.key] *************************************** 2025-05-25 04:10:00.902819 | orchestrator | Sunday 25 May 2025 04:08:15 +0000 (0:00:03.519) 0:01:17.409 ************ 2025-05-25 04:10:00.902829 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 
'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-05-25 04:10:00.902839 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-05-25 04:10:00.902854 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-05-25 04:10:00.902871 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-05-25 04:10:00.902881 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-05-25 04:10:00.902897 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 
'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-05-25 04:10:00.902907 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-05-25 04:10:00.902918 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-05-25 04:10:00.902931 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 
'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-05-25 04:10:00.902962 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-05-25 04:10:00.902991 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-05-25 04:10:00.903017 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': 
['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-05-25 04:10:00.903034 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-05-25 04:10:00.903049 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-05-25 04:10:00.903064 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-05-25 04:10:00.903088 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-05-25 04:10:00.903113 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-05-25 04:10:00.903131 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2025-05-25 04:10:00.903155 | orchestrator |
2025-05-25 04:10:00.903165 | orchestrator | TASK [designate : include_tasks] ***********************************************
2025-05-25 04:10:00.903174 | orchestrator | Sunday 25 May 2025 04:08:18 +0000 (0:00:02.901) 0:01:20.310 ************
2025-05-25 04:10:00.903182 | orchestrator | skipping: [testbed-node-0]
2025-05-25 04:10:00.903190 | orchestrator | skipping: [testbed-node-1]
2025-05-25 04:10:00.903198 | orchestrator | skipping: [testbed-node-2]
2025-05-25 04:10:00.903206 | orchestrator |
2025-05-25 04:10:00.903214 | orchestrator | TASK [designate : Copying over existing policy file] ***************************
2025-05-25 04:10:00.903221 | orchestrator | Sunday 25 May 2025 04:08:18 +0000 (0:00:00.352) 0:01:20.662 ************
2025-05-25 04:10:00.903229 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2025-05-25 04:10:00.903238 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name':
'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-05-25 04:10:00.903246 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-05-25 04:10:00.903263 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 
'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-05-25 04:10:00.903277 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-05-25 04:10:00.903285 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-05-25 04:10:00.903294 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 
'timeout': '30'}}})  2025-05-25 04:10:00.903302 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-05-25 04:10:00.903310 | orchestrator | skipping: [testbed-node-1] 2025-05-25 04:10:00.903318 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-05-25 04:10:00.903330 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-05-25 04:10:00.903343 | orchestrator 
| skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-05-25 04:10:00.903357 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-05-25 04:10:00.903365 | orchestrator | skipping: [testbed-node-0] 2025-05-25 04:10:00.903374 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 
'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-05-25 04:10:00.903382 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-05-25 04:10:00.903390 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-05-25 04:10:00.903398 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2025-05-25 04:10:00.903413 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2025-05-25 04:10:00.903430 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2025-05-25 04:10:00.903439 | orchestrator | skipping: [testbed-node-2]
2025-05-25 04:10:00.903447 | orchestrator |
2025-05-25 04:10:00.903455 | orchestrator | TASK [designate : Check designate containers] **********************************
2025-05-25 04:10:00.903463 | orchestrator | Sunday 25 May 2025 04:08:19 +0000 (0:00:00.884) 0:01:21.547 ************
2025-05-25 04:10:00.903471 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes':
['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-05-25 04:10:00.903479 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-05-25 04:10:00.903488 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-05-25 04:10:00.903506 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-05-25 04:10:00.903519 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-05-25 04:10:00.903528 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 
'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-05-25 04:10:00.903536 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-05-25 04:10:00.903544 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-05-25 04:10:00.903552 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 
'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-05-25 04:10:00.903564 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-05-25 04:10:00.903581 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-05-25 04:10:00.903610 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-05-25 04:10:00.903620 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-05-25 04:10:00.903628 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-05-25 04:10:00.903637 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-05-25 04:10:00.903645 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-05-25 04:10:00.903665 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-05-25 04:10:00.903677 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 
5672'], 'timeout': '30'}}})
2025-05-25 04:10:00.903686 | orchestrator |
2025-05-25 04:10:00.903694 | orchestrator | TASK [designate : include_tasks] ***********************************************
2025-05-25 04:10:00.903702 | orchestrator | Sunday 25 May 2025 04:08:23 +0000 (0:00:04.540) 0:01:26.087 ************
2025-05-25 04:10:00.903710 | orchestrator | skipping: [testbed-node-0]
2025-05-25 04:10:00.903718 | orchestrator | skipping: [testbed-node-1]
2025-05-25 04:10:00.903726 | orchestrator | skipping: [testbed-node-2]
2025-05-25 04:10:00.903733 | orchestrator |
2025-05-25 04:10:00.903741 | orchestrator | TASK [designate : Creating Designate databases] ********************************
2025-05-25 04:10:00.903749 | orchestrator | Sunday 25 May 2025 04:08:24 +0000 (0:00:00.291) 0:01:26.378 ************
2025-05-25 04:10:00.903757 | orchestrator | changed: [testbed-node-0] => (item=designate)
2025-05-25 04:10:00.903765 | orchestrator |
2025-05-25 04:10:00.903773 | orchestrator | TASK [designate : Creating Designate databases user and setting permissions] ***
2025-05-25 04:10:00.903781 | orchestrator | Sunday 25 May 2025 04:08:26 +0000 (0:00:02.310) 0:01:28.689 ************
2025-05-25 04:10:00.903789 | orchestrator | changed: [testbed-node-0] => (item=None)
2025-05-25 04:10:00.903797 | orchestrator | changed: [testbed-node-0 -> {{ groups['designate-central'][0] }}]
2025-05-25 04:10:00.903805 | orchestrator |
2025-05-25 04:10:00.903813 | orchestrator | TASK [designate : Running Designate bootstrap container] ***********************
2025-05-25 04:10:00.903820 | orchestrator | Sunday 25 May 2025 04:08:28 +0000 (0:00:02.028) 0:01:30.717 ************
2025-05-25 04:10:00.903828 | orchestrator | changed: [testbed-node-0]
2025-05-25 04:10:00.903836 | orchestrator |
2025-05-25 04:10:00.903844 | orchestrator | TASK [designate : Flush handlers] **********************************************
2025-05-25 04:10:00.903852 | orchestrator | Sunday 25 May 2025 04:08:46 +0000 (0:00:18.459) 0:01:49.177 ************
2025-05-25 04:10:00.903859 | orchestrator |
2025-05-25 04:10:00.903867 | orchestrator | TASK [designate : Flush handlers] **********************************************
2025-05-25 04:10:00.903875 | orchestrator | Sunday 25 May 2025 04:08:46 +0000 (0:00:00.066) 0:01:49.244 ************
2025-05-25 04:10:00.903883 | orchestrator |
2025-05-25 04:10:00.903890 | orchestrator | TASK [designate : Flush handlers] **********************************************
2025-05-25 04:10:00.903898 | orchestrator | Sunday 25 May 2025 04:08:47 +0000 (0:00:00.085) 0:01:49.329 ************
2025-05-25 04:10:00.903906 | orchestrator |
2025-05-25 04:10:00.903914 | orchestrator | RUNNING HANDLER [designate : Restart designate-backend-bind9 container] ********
2025-05-25 04:10:00.903922 | orchestrator | Sunday 25 May 2025 04:08:47 +0000 (0:00:00.082) 0:01:49.411 ************
2025-05-25 04:10:00.903929 | orchestrator | changed: [testbed-node-2]
2025-05-25 04:10:00.903937 | orchestrator | changed: [testbed-node-0]
2025-05-25 04:10:00.903945 | orchestrator | changed: [testbed-node-1]
2025-05-25 04:10:00.903957 | orchestrator |
2025-05-25 04:10:00.903965 | orchestrator | RUNNING HANDLER [designate : Restart designate-api container] ******************
2025-05-25 04:10:00.903973 | orchestrator | Sunday 25 May 2025 04:08:59 +0000 (0:00:12.463) 0:02:01.874 ************
2025-05-25 04:10:00.903980 | orchestrator | changed: [testbed-node-0]
2025-05-25 04:10:00.903988 | orchestrator | changed: [testbed-node-1]
2025-05-25 04:10:00.903996 | orchestrator | changed: [testbed-node-2]
2025-05-25 04:10:00.904004 | orchestrator |
2025-05-25 04:10:00.904011 | orchestrator | RUNNING HANDLER [designate : Restart designate-central container] **************
2025-05-25 04:10:00.904019 | orchestrator | Sunday 25 May 2025 04:09:05 +0000 (0:00:05.758) 0:02:07.633 ************
2025-05-25 04:10:00.904027 | orchestrator | changed: [testbed-node-0]
2025-05-25 04:10:00.904035 | orchestrator | changed: [testbed-node-1]
2025-05-25 04:10:00.904043 | orchestrator | changed: [testbed-node-2]
2025-05-25 04:10:00.904050 | orchestrator |
2025-05-25 04:10:00.904058 | orchestrator | RUNNING HANDLER [designate : Restart designate-producer container] *************
2025-05-25 04:10:00.904066 | orchestrator | Sunday 25 May 2025 04:09:16 +0000 (0:00:11.022) 0:02:18.655 ************
2025-05-25 04:10:00.904074 | orchestrator | changed: [testbed-node-0]
2025-05-25 04:10:00.904081 | orchestrator | changed: [testbed-node-2]
2025-05-25 04:10:00.904089 | orchestrator | changed: [testbed-node-1]
2025-05-25 04:10:00.904097 | orchestrator |
2025-05-25 04:10:00.904104 | orchestrator | RUNNING HANDLER [designate : Restart designate-mdns container] *****************
2025-05-25 04:10:00.904112 | orchestrator | Sunday 25 May 2025 04:09:26 +0000 (0:00:10.620) 0:02:29.275 ************
2025-05-25 04:10:00.904120 | orchestrator | changed: [testbed-node-0]
2025-05-25 04:10:00.904128 | orchestrator | changed: [testbed-node-2]
2025-05-25 04:10:00.904135 | orchestrator | changed: [testbed-node-1]
2025-05-25 04:10:00.904143 | orchestrator |
2025-05-25 04:10:00.904151 | orchestrator | RUNNING HANDLER [designate : Restart designate-worker container] ***************
2025-05-25 04:10:00.904159 | orchestrator | Sunday 25 May 2025 04:09:38 +0000 (0:00:11.577) 0:02:40.853 ************
2025-05-25 04:10:00.904166 | orchestrator | changed: [testbed-node-0]
2025-05-25 04:10:00.904174 | orchestrator | changed: [testbed-node-2]
2025-05-25 04:10:00.904182 | orchestrator | changed: [testbed-node-1]
2025-05-25 04:10:00.904189 | orchestrator |
2025-05-25 04:10:00.904197 | orchestrator | TASK [designate : Non-destructive DNS pools update] ****************************
2025-05-25 04:10:00.904205 | orchestrator | Sunday 25 May 2025 04:09:50 +0000 (0:00:12.248) 0:02:53.102 ************
2025-05-25 04:10:00.904212 | orchestrator | changed: [testbed-node-0]
2025-05-25 04:10:00.904220 | orchestrator |
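Editor's note: the designate container definitions earlier in this play attach healthchecks of the form `healthcheck_port <service> <port>` (e.g. `healthcheck_port designate-worker 5672`, interval 30s, 3 retries). As a rough sketch only, assuming the check reduces to TCP reachability of the target port (the real kolla healthcheck script inspects the service's sockets), a minimal Python equivalent could look like:

```python
import socket

def port_reachable(host: str, port: int, timeout: float = 3.0) -> bool:
    """Return True if a TCP connection to host:port succeeds within timeout."""
    try:
        # create_connection handles name resolution and closes on context exit
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False
```

With the healthcheck parameters shown above, a container is marked unhealthy once a check like this fails `retries` (3) consecutive times at the 30-second interval.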
2025-05-25 04:10:00.904228 | orchestrator | PLAY RECAP *********************************************************************
2025-05-25 04:10:00.904240 | orchestrator | testbed-node-0 : ok=29  changed=23  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0
2025-05-25 04:10:00.904248 | orchestrator | testbed-node-1 : ok=19  changed=15  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2025-05-25 04:10:00.904256 | orchestrator | testbed-node-2 : ok=19  changed=15  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2025-05-25 04:10:00.904264 | orchestrator |
2025-05-25 04:10:00.904272 | orchestrator |
2025-05-25 04:10:00.904284 | orchestrator | TASKS RECAP ********************************************************************
2025-05-25 04:10:00.904292 | orchestrator | Sunday 25 May 2025 04:09:57 +0000 (0:00:07.020) 0:03:00.122 ************
2025-05-25 04:10:00.904300 | orchestrator | ===============================================================================
2025-05-25 04:10:00.904308 | orchestrator | designate : Copying over designate.conf -------------------------------- 18.57s
2025-05-25 04:10:00.904315 | orchestrator | designate : Running Designate bootstrap container ---------------------- 18.46s
2025-05-25 04:10:00.904323 | orchestrator | designate : Restart designate-backend-bind9 container ------------------ 12.46s
2025-05-25 04:10:00.904331 | orchestrator | designate : Restart designate-worker container ------------------------- 12.25s
2025-05-25 04:10:00.904345 | orchestrator | designate : Restart designate-mdns container --------------------------- 11.58s
2025-05-25 04:10:00.904353 | orchestrator | designate : Restart designate-central container ------------------------ 11.02s
2025-05-25 04:10:00.904361 | orchestrator | designate : Restart designate-producer container ----------------------- 10.62s
2025-05-25 04:10:00.904368 | orchestrator | designate : Non-destructive DNS pools update ---------------------------- 7.02s
2025-05-25 04:10:00.904376 | orchestrator | service-cert-copy : designate | Copying over extra CA certificates ------ 6.74s
2025-05-25 04:10:00.904384 | orchestrator | designate : Copying over config.json files for services ----------------- 6.32s
2025-05-25 04:10:00.904392 | orchestrator | service-ks-register : designate | Creating endpoints -------------------- 6.22s
2025-05-25 04:10:00.904399 | orchestrator | designate : Restart designate-api container ----------------------------- 5.76s
2025-05-25 04:10:00.904407 | orchestrator | designate : Check designate containers ---------------------------------- 4.54s
2025-05-25 04:10:00.904415 | orchestrator | service-ks-register : designate | Granting user roles ------------------- 4.45s
2025-05-25 04:10:00.904423 | orchestrator | designate : Copying over pools.yaml ------------------------------------- 4.05s
2025-05-25 04:10:00.904430 | orchestrator | service-ks-register : designate | Creating users ------------------------ 3.92s
2025-05-25 04:10:00.904438 | orchestrator | service-ks-register : designate | Creating roles ------------------------ 3.88s
2025-05-25 04:10:00.904446 | orchestrator | designate : Copying over rndc.conf -------------------------------------- 3.52s
2025-05-25 04:10:00.904454 | orchestrator | service-ks-register : designate | Creating projects --------------------- 3.34s
2025-05-25 04:10:00.904462 | orchestrator | service-ks-register : designate | Creating services --------------------- 3.32s
2025-05-25 04:10:00.904469 | orchestrator | 2025-05-25 04:10:00 | INFO  | Task 6e378feb-528a-412a-95cd-f98cd2c708a8 is in state STARTED
2025-05-25 04:10:00.905657 | orchestrator | 2025-05-25 04:10:00 | INFO  | Task 54113aef-c85d-4bae-9b37-5c6f7b29248e is in state STARTED
2025-05-25 04:10:00.905691 | orchestrator | 2025-05-25 04:10:00 | INFO  | Wait 1 second(s) until the next check
2025-05-25 04:10:03.968394 | orchestrator | 2025-05-25 04:10:03 | INFO  | Task e271a777-9167-4741-94de-8f1eee63744d is in state STARTED
2025-05-25
04:10:03.971542 | orchestrator | 2025-05-25 04:10:03 | INFO  | Task 9aa85e1d-fb60-467e-9cfc-45d5d1e0ba87 is in state SUCCESS 2025-05-25 04:10:03.971665 | orchestrator | 2025-05-25 04:10:03 | INFO  | Task 6e378feb-528a-412a-95cd-f98cd2c708a8 is in state STARTED 2025-05-25 04:10:03.973586 | orchestrator | 2025-05-25 04:10:03 | INFO  | Task 54113aef-c85d-4bae-9b37-5c6f7b29248e is in state STARTED 2025-05-25 04:10:03.973730 | orchestrator | 2025-05-25 04:10:03 | INFO  | Wait 1 second(s) until the next check 2025-05-25 04:10:07.021823 | orchestrator | 2025-05-25 04:10:07 | INFO  | Task e271a777-9167-4741-94de-8f1eee63744d is in state STARTED 2025-05-25 04:10:07.021937 | orchestrator | 2025-05-25 04:10:07 | INFO  | Task 8d971383-7af0-4460-aa31-cdabb83173b7 is in state STARTED 2025-05-25 04:10:07.026931 | orchestrator | 2025-05-25 04:10:07 | INFO  | Task 6e378feb-528a-412a-95cd-f98cd2c708a8 is in state STARTED 2025-05-25 04:10:07.028062 | orchestrator | 2025-05-25 04:10:07 | INFO  | Task 54113aef-c85d-4bae-9b37-5c6f7b29248e is in state STARTED 2025-05-25 04:10:07.028102 | orchestrator | 2025-05-25 04:10:07 | INFO  | Wait 1 second(s) until the next check 2025-05-25 04:10:10.079544 | orchestrator | 2025-05-25 04:10:10 | INFO  | Task e271a777-9167-4741-94de-8f1eee63744d is in state STARTED 2025-05-25 04:10:10.080952 | orchestrator | 2025-05-25 04:10:10 | INFO  | Task 8d971383-7af0-4460-aa31-cdabb83173b7 is in state STARTED 2025-05-25 04:10:10.083905 | orchestrator | 2025-05-25 04:10:10 | INFO  | Task 6e378feb-528a-412a-95cd-f98cd2c708a8 is in state STARTED 2025-05-25 04:10:10.085575 | orchestrator | 2025-05-25 04:10:10 | INFO  | Task 54113aef-c85d-4bae-9b37-5c6f7b29248e is in state STARTED 2025-05-25 04:10:10.085677 | orchestrator | 2025-05-25 04:10:10 | INFO  | Wait 1 second(s) until the next check 2025-05-25 04:10:13.153248 | orchestrator | 2025-05-25 04:10:13 | INFO  | Task e271a777-9167-4741-94de-8f1eee63744d is in state STARTED 2025-05-25 04:10:13.155405 | orchestrator 
| 2025-05-25 04:10:13 | INFO  | Task 8d971383-7af0-4460-aa31-cdabb83173b7 is in state STARTED
2025-05-25 04:10:13.157706 | orchestrator | 2025-05-25 04:10:13 | INFO  | Task 6e378feb-528a-412a-95cd-f98cd2c708a8 is in state STARTED
2025-05-25 04:10:13.159959 | orchestrator | 2025-05-25 04:10:13 | INFO  | Task 54113aef-c85d-4bae-9b37-5c6f7b29248e is in state STARTED
2025-05-25 04:10:13.160007 | orchestrator | 2025-05-25 04:10:13 | INFO  | Wait 1 second(s) until the next check
2025-05-25 04:11:44.704418 | orchestrator | 2025-05-25 04:11:44 | INFO  | Task e271a777-9167-4741-94de-8f1eee63744d is in state STARTED
2025-05-25 04:11:44.707579 | orchestrator | 2025-05-25 04:11:44 | INFO  | Task
8d971383-7af0-4460-aa31-cdabb83173b7 is in state STARTED 2025-05-25 04:11:44.709959 | orchestrator | 2025-05-25 04:11:44 | INFO  | Task 6e378feb-528a-412a-95cd-f98cd2c708a8 is in state STARTED 2025-05-25 04:11:44.712661 | orchestrator | 2025-05-25 04:11:44 | INFO  | Task 54113aef-c85d-4bae-9b37-5c6f7b29248e is in state STARTED 2025-05-25 04:11:44.712713 | orchestrator | 2025-05-25 04:11:44 | INFO  | Wait 1 second(s) until the next check 2025-05-25 04:11:47.760337 | orchestrator | 2025-05-25 04:11:47 | INFO  | Task e271a777-9167-4741-94de-8f1eee63744d is in state STARTED 2025-05-25 04:11:47.762923 | orchestrator | 2025-05-25 04:11:47 | INFO  | Task 8d971383-7af0-4460-aa31-cdabb83173b7 is in state STARTED 2025-05-25 04:11:47.765653 | orchestrator | 2025-05-25 04:11:47 | INFO  | Task 6e378feb-528a-412a-95cd-f98cd2c708a8 is in state STARTED 2025-05-25 04:11:47.768077 | orchestrator | 2025-05-25 04:11:47 | INFO  | Task 54113aef-c85d-4bae-9b37-5c6f7b29248e is in state STARTED 2025-05-25 04:11:47.768126 | orchestrator | 2025-05-25 04:11:47 | INFO  | Wait 1 second(s) until the next check 2025-05-25 04:11:50.826632 | orchestrator | 2025-05-25 04:11:50.826720 | orchestrator | 2025-05-25 04:11:50.826730 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-05-25 04:11:50.826739 | orchestrator | 2025-05-25 04:11:50.826747 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-05-25 04:11:50.826755 | orchestrator | Sunday 25 May 2025 04:10:02 +0000 (0:00:00.201) 0:00:00.201 ************ 2025-05-25 04:11:50.826803 | orchestrator | ok: [testbed-node-0] 2025-05-25 04:11:50.826812 | orchestrator | ok: [testbed-node-1] 2025-05-25 04:11:50.826820 | orchestrator | ok: [testbed-node-2] 2025-05-25 04:11:50.826827 | orchestrator | 2025-05-25 04:11:50.826835 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-05-25 04:11:50.826843 | 
orchestrator | Sunday 25 May 2025 04:10:02 +0000 (0:00:00.293) 0:00:00.494 ************
2025-05-25 04:11:50.826851 | orchestrator | ok: [testbed-node-0] => (item=enable_nova_True)
2025-05-25 04:11:50.826859 | orchestrator | ok: [testbed-node-1] => (item=enable_nova_True)
2025-05-25 04:11:50.826867 | orchestrator | ok: [testbed-node-2] => (item=enable_nova_True)
2025-05-25 04:11:50.826874 | orchestrator |
2025-05-25 04:11:50.826882 | orchestrator | PLAY [Wait for the Nova service] ***********************************************
2025-05-25 04:11:50.826908 | orchestrator |
2025-05-25 04:11:50.826916 | orchestrator | TASK [Waiting for Nova public port to be UP] ***********************************
2025-05-25 04:11:50.826923 | orchestrator | Sunday 25 May 2025 04:10:02 +0000 (0:00:00.629) 0:00:01.123 ************
2025-05-25 04:11:50.826931 | orchestrator | ok: [testbed-node-1]
2025-05-25 04:11:50.826939 | orchestrator | ok: [testbed-node-2]
2025-05-25 04:11:50.826946 | orchestrator | ok: [testbed-node-0]
2025-05-25 04:11:50.826953 | orchestrator |
2025-05-25 04:11:50.826961 | orchestrator | PLAY RECAP *********************************************************************
2025-05-25 04:11:50.826969 | orchestrator | testbed-node-0 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-05-25 04:11:50.826978 | orchestrator | testbed-node-1 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-05-25 04:11:50.826985 | orchestrator | testbed-node-2 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-05-25 04:11:50.826993 | orchestrator |
2025-05-25 04:11:50.827000 | orchestrator |
2025-05-25 04:11:50.827007 | orchestrator | TASKS RECAP ********************************************************************
2025-05-25 04:11:50.827014 | orchestrator | Sunday 25 May 2025 04:10:03 +0000 (0:00:00.774) 0:00:01.898 ************
2025-05-25 04:11:50.827022 | orchestrator | ===============================================================================
2025-05-25 04:11:50.827040 | orchestrator | Waiting for Nova public port to be UP ----------------------------------- 0.77s
2025-05-25 04:11:50.827048 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.63s
2025-05-25 04:11:50.827055 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.29s
2025-05-25 04:11:50.827062 | orchestrator |
2025-05-25 04:11:50.827069 | orchestrator |
2025-05-25 04:11:50.827076 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2025-05-25 04:11:50.827083 | orchestrator |
2025-05-25 04:11:50.827091 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2025-05-25 04:11:50.827098 | orchestrator | Sunday 25 May 2025 04:09:54 +0000 (0:00:00.255) 0:00:00.255 ************
2025-05-25 04:11:50.827105 | orchestrator | ok: [testbed-node-0]
2025-05-25 04:11:50.827112 | orchestrator | ok: [testbed-node-1]
2025-05-25 04:11:50.827119 | orchestrator | ok: [testbed-node-2]
2025-05-25 04:11:50.827127 | orchestrator |
2025-05-25 04:11:50.827134 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2025-05-25 04:11:50.827141 | orchestrator | Sunday 25 May 2025 04:09:54 +0000 (0:00:00.281) 0:00:00.537 ************
2025-05-25 04:11:50.827148 | orchestrator | ok: [testbed-node-0] => (item=enable_magnum_True)
2025-05-25 04:11:50.827157 | orchestrator | ok: [testbed-node-1] => (item=enable_magnum_True)
2025-05-25 04:11:50.827165 | orchestrator | ok: [testbed-node-2] => (item=enable_magnum_True)
2025-05-25 04:11:50.827173 | orchestrator |
2025-05-25 04:11:50.827181 | orchestrator | PLAY [Apply role magnum] *******************************************************
2025-05-25 04:11:50.827189 | orchestrator |
2025-05-25 04:11:50.827197 | orchestrator | TASK [magnum :
include_tasks] **************************************************
2025-05-25 04:11:50.827205 | orchestrator | Sunday 25 May 2025 04:09:55 +0000 (0:00:00.448) 0:00:00.986 ************
2025-05-25 04:11:50.827214 | orchestrator | included: /ansible/roles/magnum/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-05-25 04:11:50.827222 | orchestrator |
2025-05-25 04:11:50.827230 | orchestrator | TASK [service-ks-register : magnum | Creating services] ************************
2025-05-25 04:11:50.827238 | orchestrator | Sunday 25 May 2025 04:09:56 +0000 (0:00:00.559) 0:00:01.545 ************
2025-05-25 04:11:50.827246 | orchestrator | changed: [testbed-node-0] => (item=magnum (container-infra))
2025-05-25 04:11:50.827254 | orchestrator |
2025-05-25 04:11:50.827262 | orchestrator | TASK [service-ks-register : magnum | Creating endpoints] ***********************
2025-05-25 04:11:50.827270 | orchestrator | Sunday 25 May 2025 04:09:59 +0000 (0:00:03.332) 0:00:04.878 ************
2025-05-25 04:11:50.827278 | orchestrator | changed: [testbed-node-0] => (item=magnum -> https://api-int.testbed.osism.xyz:9511/v1 -> internal)
2025-05-25 04:11:50.827293 | orchestrator | changed: [testbed-node-0] => (item=magnum -> https://api.testbed.osism.xyz:9511/v1 -> public)
2025-05-25 04:11:50.827302 | orchestrator |
2025-05-25 04:11:50.827311 | orchestrator | TASK [service-ks-register : magnum | Creating projects] ************************
2025-05-25 04:11:50.827319 | orchestrator | Sunday 25 May 2025 04:10:05 +0000 (0:00:06.193) 0:00:11.071 ************
2025-05-25 04:11:50.827327 | orchestrator | ok: [testbed-node-0] => (item=service)
2025-05-25 04:11:50.827335 | orchestrator |
2025-05-25 04:11:50.827343 | orchestrator | TASK [service-ks-register : magnum | Creating users] ***************************
2025-05-25 04:11:50.827352 | orchestrator | Sunday 25 May 2025 04:10:08 +0000 (0:00:02.943) 0:00:14.015 ************
2025-05-25 04:11:50.827373 | orchestrator | [WARNING]: Module did not set no_log for update_password
2025-05-25 04:11:50.827382 | orchestrator | changed: [testbed-node-0] => (item=magnum -> service)
2025-05-25 04:11:50.827390 | orchestrator |
2025-05-25 04:11:50.827399 | orchestrator | TASK [service-ks-register : magnum | Creating roles] ***************************
2025-05-25 04:11:50.827407 | orchestrator | Sunday 25 May 2025 04:10:12 +0000 (0:00:03.789) 0:00:17.804 ************
2025-05-25 04:11:50.827415 | orchestrator | ok: [testbed-node-0] => (item=admin)
2025-05-25 04:11:50.827423 | orchestrator |
2025-05-25 04:11:50.827432 | orchestrator | TASK [service-ks-register : magnum | Granting user roles] **********************
2025-05-25 04:11:50.827439 | orchestrator | Sunday 25 May 2025 04:10:15 +0000 (0:00:03.084) 0:00:20.888 ************
2025-05-25 04:11:50.827446 | orchestrator | changed: [testbed-node-0] => (item=magnum -> service -> admin)
2025-05-25 04:11:50.827453 | orchestrator |
2025-05-25 04:11:50.827460 | orchestrator | TASK [magnum : Creating Magnum trustee domain] *********************************
2025-05-25 04:11:50.827467 | orchestrator | Sunday 25 May 2025 04:10:19 +0000 (0:00:03.936) 0:00:24.825 ************
2025-05-25 04:11:50.827475 | orchestrator | changed: [testbed-node-0]
2025-05-25 04:11:50.827482 | orchestrator |
2025-05-25 04:11:50.827489 | orchestrator | TASK [magnum : Creating Magnum trustee user] ***********************************
2025-05-25 04:11:50.827496 | orchestrator | Sunday 25 May 2025 04:10:22 +0000 (0:00:03.025) 0:00:27.851 ************
2025-05-25 04:11:50.827503 | orchestrator | changed: [testbed-node-0]
2025-05-25 04:11:50.827511 | orchestrator |
2025-05-25 04:11:50.827535 | orchestrator | TASK [magnum : Creating Magnum trustee user role] ******************************
2025-05-25 04:11:50.827543 | orchestrator | Sunday 25 May 2025 04:10:26 +0000 (0:00:03.849) 0:00:31.700 ************
2025-05-25 04:11:50.827550 | orchestrator | changed: [testbed-node-0]
2025-05-25 04:11:50.827557
| orchestrator | 2025-05-25 04:11:50.827565 | orchestrator | TASK [magnum : Ensuring config directories exist] ****************************** 2025-05-25 04:11:50.827572 | orchestrator | Sunday 25 May 2025 04:10:29 +0000 (0:00:03.680) 0:00:35.381 ************ 2025-05-25 04:11:50.827587 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-05-25 04:11:50.827598 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': 
{'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-05-25 04:11:50.827611 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-05-25 04:11:50.827626 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-05-25 04:11:50.827635 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 
'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-05-25 04:11:50.827646 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-05-25 04:11:50.827654 | orchestrator | 2025-05-25 04:11:50.827662 | orchestrator | TASK [magnum : Check if policies shall be overwritten] ************************* 2025-05-25 04:11:50.827670 | orchestrator | Sunday 25 May 2025 04:10:31 +0000 (0:00:01.715) 0:00:37.096 ************ 2025-05-25 04:11:50.827681 | orchestrator | skipping: [testbed-node-0] 2025-05-25 04:11:50.827689 | orchestrator | 2025-05-25 04:11:50.827696 | orchestrator | TASK [magnum : Set magnum policy file] ***************************************** 2025-05-25 04:11:50.827703 | orchestrator | Sunday 25 May 2025 04:10:31 +0000 (0:00:00.108) 
0:00:37.204 ************ 2025-05-25 04:11:50.827710 | orchestrator | skipping: [testbed-node-0] 2025-05-25 04:11:50.827718 | orchestrator | skipping: [testbed-node-1] 2025-05-25 04:11:50.827725 | orchestrator | skipping: [testbed-node-2] 2025-05-25 04:11:50.827732 | orchestrator | 2025-05-25 04:11:50.827739 | orchestrator | TASK [magnum : Check if kubeconfig file is supplied] *************************** 2025-05-25 04:11:50.827746 | orchestrator | Sunday 25 May 2025 04:10:32 +0000 (0:00:00.359) 0:00:37.563 ************ 2025-05-25 04:11:50.827754 | orchestrator | ok: [testbed-node-0 -> localhost] 2025-05-25 04:11:50.827761 | orchestrator | 2025-05-25 04:11:50.827768 | orchestrator | TASK [magnum : Copying over kubeconfig file] *********************************** 2025-05-25 04:11:50.827775 | orchestrator | Sunday 25 May 2025 04:10:32 +0000 (0:00:00.767) 0:00:38.331 ************ 2025-05-25 04:11:50.827783 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-05-25 04:11:50.827798 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': 
True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-05-25 04:11:50.827806 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-05-25 04:11:50.827818 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 
'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-05-25 04:11:50.827830 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-05-25 04:11:50.827838 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 
2025-05-25 04:11:50.827846 | orchestrator | 2025-05-25 04:11:50.827853 | orchestrator | TASK [magnum : Set magnum kubeconfig file's path] ****************************** 2025-05-25 04:11:50.827860 | orchestrator | Sunday 25 May 2025 04:10:35 +0000 (0:00:02.402) 0:00:40.733 ************ 2025-05-25 04:11:50.827868 | orchestrator | ok: [testbed-node-0] 2025-05-25 04:11:50.827875 | orchestrator | ok: [testbed-node-1] 2025-05-25 04:11:50.827882 | orchestrator | ok: [testbed-node-2] 2025-05-25 04:11:50.827889 | orchestrator | 2025-05-25 04:11:50.827896 | orchestrator | TASK [magnum : include_tasks] ************************************************** 2025-05-25 04:11:50.827907 | orchestrator | Sunday 25 May 2025 04:10:35 +0000 (0:00:00.238) 0:00:40.971 ************ 2025-05-25 04:11:50.827916 | orchestrator | included: /ansible/roles/magnum/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-25 04:11:50.827923 | orchestrator | 2025-05-25 04:11:50.827931 | orchestrator | TASK [service-cert-copy : magnum | Copying over extra CA certificates] ********* 2025-05-25 04:11:50.827938 | orchestrator | Sunday 25 May 2025 04:10:36 +0000 (0:00:00.644) 0:00:41.616 ************ 2025-05-25 04:11:50.827945 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': 
{'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-05-25 04:11:50.827958 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-05-25 04:11:50.828002 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': 
'9511'}}}}) 2025-05-25 04:11:50.828011 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-05-25 04:11:50.828024 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-05-25 04:11:50.828032 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-05-25 04:11:50.828045 | orchestrator | 2025-05-25 04:11:50.828052 | orchestrator | TASK [service-cert-copy : magnum | Copying over backend internal TLS certificate] *** 2025-05-25 04:11:50.828060 | orchestrator | Sunday 25 May 2025 04:10:38 +0000 (0:00:02.349) 0:00:43.966 ************ 2025-05-25 04:11:50.828071 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-05-25 04:11:50.828079 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-05-25 04:11:50.828086 | orchestrator | skipping: [testbed-node-0] 2025-05-25 04:11:50.828094 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-05-25 04:11:50.828107 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-05-25 04:11:50.828114 | orchestrator | skipping: [testbed-node-1] 2025-05-25 04:11:50.828126 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-05-25 04:11:50.828150 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-05-25 04:11:50.828163 | orchestrator | skipping: [testbed-node-2] 2025-05-25 04:11:50.828174 | orchestrator | 2025-05-25 
04:11:50.828185 | orchestrator | TASK [service-cert-copy : magnum | Copying over backend internal TLS key] ****** 2025-05-25 04:11:50.828196 | orchestrator | Sunday 25 May 2025 04:10:39 +0000 (0:00:00.599) 0:00:44.566 ************ 2025-05-25 04:11:50.828208 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-05-25 04:11:50.828220 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-05-25 04:11:50.828232 | orchestrator | skipping: 
[testbed-node-0] 2025-05-25 04:11:50.828250 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-05-25 04:11:50.828270 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-05-25 04:11:50.828282 | orchestrator | skipping: [testbed-node-1] 2025-05-25 04:11:50.828298 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 
'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-05-25 04:11:50.828311 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-05-25 04:11:50.828323 | orchestrator | skipping: [testbed-node-2] 2025-05-25 04:11:50.828335 | orchestrator | 2025-05-25 04:11:50.828347 | orchestrator | TASK [magnum : Copying over config.json files for services] ******************** 2025-05-25 04:11:50.828358 | orchestrator | Sunday 25 May 2025 04:10:40 +0000 (0:00:01.228) 0:00:45.794 ************ 2025-05-25 04:11:50.828376 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 
'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-05-25 04:11:50 | INFO  | Task e271a777-9167-4741-94de-8f1eee63744d is in state SUCCESS 2025-05-25 04:11:50.828714 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-05-25 04:11:50.828729 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': 
{'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-05-25 04:11:50.828737 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-05-25 04:11:50.828745 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': 
['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-05-25 04:11:50.828759 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-05-25 04:11:50.828772 | orchestrator | 2025-05-25 04:11:50.828779 | orchestrator | TASK [magnum : Copying over magnum.conf] *************************************** 2025-05-25 04:11:50.828787 | orchestrator | Sunday 25 May 2025 04:10:42 +0000 (0:00:02.242) 0:00:48.037 ************ 2025-05-25 04:11:50.828794 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-05-25 04:11:50.828806 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-05-25 04:11:50.828813 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 
'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-05-25 04:11:50.828821 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-05-25 04:11:50.828834 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-05-25 04:11:50.828847 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 
'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-05-25 04:11:50.828855 | orchestrator | 2025-05-25 04:11:50.828862 | orchestrator | TASK [magnum : Copying over existing policy file] ****************************** 2025-05-25 04:11:50.828869 | orchestrator | Sunday 25 May 2025 04:10:47 +0000 (0:00:04.833) 0:00:52.870 ************ 2025-05-25 04:11:50.828880 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-05-25 04:11:50.828888 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 
'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-05-25 04:11:50.828896 | orchestrator | skipping: [testbed-node-0] 2025-05-25 04:11:50.828904 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-05-25 04:11:50.828920 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': 
['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-05-25 04:11:50.828928 | orchestrator | skipping: [testbed-node-1] 2025-05-25 04:11:50.828935 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-05-25 04:11:50.828947 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-05-25 04:11:50.828954 | orchestrator | skipping: [testbed-node-2] 2025-05-25 04:11:50.828962 | orchestrator | 2025-05-25 04:11:50.828969 | orchestrator | TASK [magnum : Check magnum containers] **************************************** 2025-05-25 04:11:50.828976 | orchestrator | Sunday 25 May 2025 04:10:48 +0000 (0:00:00.808) 0:00:53.679 ************ 2025-05-25 04:11:50.828983 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-05-25 04:11:50.828995 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-05-25 04:11:50.829011 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-05-25 04:11:50.829023 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': 
'5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-05-25 04:11:50.829031 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-05-25 04:11:50.829038 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-05-25 04:11:50.829050 | orchestrator | 2025-05-25 04:11:50.829058 | orchestrator | TASK [magnum : include_tasks] ************************************************** 2025-05-25 04:11:50.829065 | orchestrator | Sunday 25 May 2025 04:10:50 +0000 (0:00:02.079) 0:00:55.758 ************ 2025-05-25 04:11:50.829072 | orchestrator | skipping: [testbed-node-0] 2025-05-25 
04:11:50.829080 | orchestrator | skipping: [testbed-node-1] 2025-05-25 04:11:50.829087 | orchestrator | skipping: [testbed-node-2] 2025-05-25 04:11:50.829094 | orchestrator | 2025-05-25 04:11:50.829101 | orchestrator | TASK [magnum : Creating Magnum database] *************************************** 2025-05-25 04:11:50.829109 | orchestrator | Sunday 25 May 2025 04:10:50 +0000 (0:00:00.281) 0:00:56.040 ************ 2025-05-25 04:11:50.829116 | orchestrator | changed: [testbed-node-0] 2025-05-25 04:11:50.829123 | orchestrator | 2025-05-25 04:11:50.829130 | orchestrator | TASK [magnum : Creating Magnum database user and setting permissions] ********** 2025-05-25 04:11:50.829137 | orchestrator | Sunday 25 May 2025 04:10:52 +0000 (0:00:02.001) 0:00:58.041 ************ 2025-05-25 04:11:50.829144 | orchestrator | changed: [testbed-node-0] 2025-05-25 04:11:50.829151 | orchestrator | 2025-05-25 04:11:50.829159 | orchestrator | TASK [magnum : Running Magnum bootstrap container] ***************************** 2025-05-25 04:11:50.829166 | orchestrator | Sunday 25 May 2025 04:10:54 +0000 (0:00:02.363) 0:01:00.405 ************ 2025-05-25 04:11:50.829176 | orchestrator | changed: [testbed-node-0] 2025-05-25 04:11:50.829184 | orchestrator | 2025-05-25 04:11:50.829191 | orchestrator | TASK [magnum : Flush handlers] ************************************************* 2025-05-25 04:11:50.829198 | orchestrator | Sunday 25 May 2025 04:11:14 +0000 (0:00:19.530) 0:01:19.935 ************ 2025-05-25 04:11:50.829205 | orchestrator | 2025-05-25 04:11:50.829213 | orchestrator | TASK [magnum : Flush handlers] ************************************************* 2025-05-25 04:11:50.829220 | orchestrator | Sunday 25 May 2025 04:11:14 +0000 (0:00:00.279) 0:01:20.215 ************ 2025-05-25 04:11:50.829227 | orchestrator | 2025-05-25 04:11:50.829234 | orchestrator | TASK [magnum : Flush handlers] ************************************************* 2025-05-25 04:11:50.829241 | orchestrator | Sunday 25 May 
2025 04:11:14 +0000 (0:00:00.147) 0:01:20.362 ************ 2025-05-25 04:11:50.829248 | orchestrator | 2025-05-25 04:11:50.829255 | orchestrator | RUNNING HANDLER [magnum : Restart magnum-api container] ************************ 2025-05-25 04:11:50.829262 | orchestrator | Sunday 25 May 2025 04:11:14 +0000 (0:00:00.067) 0:01:20.429 ************ 2025-05-25 04:11:50.829270 | orchestrator | changed: [testbed-node-0] 2025-05-25 04:11:50.829279 | orchestrator | changed: [testbed-node-2] 2025-05-25 04:11:50.829287 | orchestrator | changed: [testbed-node-1] 2025-05-25 04:11:50.829295 | orchestrator | 2025-05-25 04:11:50.829303 | orchestrator | RUNNING HANDLER [magnum : Restart magnum-conductor container] ****************** 2025-05-25 04:11:50.829311 | orchestrator | Sunday 25 May 2025 04:11:38 +0000 (0:00:23.405) 0:01:43.835 ************ 2025-05-25 04:11:50.829320 | orchestrator | changed: [testbed-node-0] 2025-05-25 04:11:50.829328 | orchestrator | changed: [testbed-node-2] 2025-05-25 04:11:50.829336 | orchestrator | changed: [testbed-node-1] 2025-05-25 04:11:50.829344 | orchestrator | 2025-05-25 04:11:50.829352 | orchestrator | PLAY RECAP ********************************************************************* 2025-05-25 04:11:50.829361 | orchestrator | testbed-node-0 : ok=26  changed=18  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2025-05-25 04:11:50.829369 | orchestrator | testbed-node-1 : ok=13  changed=8  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2025-05-25 04:11:50.829378 | orchestrator | testbed-node-2 : ok=13  changed=8  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2025-05-25 04:11:50.829386 | orchestrator | 2025-05-25 04:11:50.829394 | orchestrator | 2025-05-25 04:11:50.829406 | orchestrator | TASKS RECAP ******************************************************************** 2025-05-25 04:11:50.829414 | orchestrator | Sunday 25 May 2025 04:11:49 +0000 (0:00:10.873) 0:01:54.709 ************ 2025-05-25 04:11:50.829422 | orchestrator | 
===============================================================================
2025-05-25 04:11:50.829436 | orchestrator | magnum : Restart magnum-api container ---------------------------------- 23.41s
2025-05-25 04:11:50.829444 | orchestrator | magnum : Running Magnum bootstrap container ---------------------------- 19.53s
2025-05-25 04:11:50.829453 | orchestrator | magnum : Restart magnum-conductor container ---------------------------- 10.87s
2025-05-25 04:11:50.829461 | orchestrator | service-ks-register : magnum | Creating endpoints ----------------------- 6.19s
2025-05-25 04:11:50.829469 | orchestrator | magnum : Copying over magnum.conf --------------------------------------- 4.83s
2025-05-25 04:11:50.829477 | orchestrator | service-ks-register : magnum | Granting user roles ---------------------- 3.94s
2025-05-25 04:11:50.829486 | orchestrator | magnum : Creating Magnum trustee user ----------------------------------- 3.85s
2025-05-25 04:11:50.829493 | orchestrator | service-ks-register : magnum | Creating users --------------------------- 3.79s
2025-05-25 04:11:50.829500 | orchestrator | magnum : Creating Magnum trustee user role ------------------------------ 3.68s
2025-05-25 04:11:50.829507 | orchestrator | service-ks-register : magnum | Creating services ------------------------ 3.33s
2025-05-25 04:11:50.829514 | orchestrator | service-ks-register : magnum | Creating roles --------------------------- 3.08s
2025-05-25 04:11:50.829571 | orchestrator | magnum : Creating Magnum trustee domain --------------------------------- 3.03s
2025-05-25 04:11:50.829579 | orchestrator | service-ks-register : magnum | Creating projects ------------------------ 2.94s
2025-05-25 04:11:50.829586 | orchestrator | magnum : Copying over kubeconfig file ----------------------------------- 2.40s
2025-05-25 04:11:50.829593 | orchestrator | magnum : Creating Magnum database user and setting permissions ---------- 2.36s
2025-05-25 04:11:50.829600 | orchestrator | service-cert-copy : magnum | Copying over extra CA certificates --------- 2.35s
2025-05-25 04:11:50.829607 | orchestrator | magnum : Copying over config.json files for services -------------------- 2.24s
2025-05-25 04:11:50.829615 | orchestrator | magnum : Check magnum containers ---------------------------------------- 2.08s
2025-05-25 04:11:50.829622 | orchestrator | magnum : Creating Magnum database --------------------------------------- 2.00s
2025-05-25 04:11:50.829629 | orchestrator | magnum : Ensuring config directories exist ------------------------------ 1.72s
2025-05-25 04:11:50.829636 | orchestrator | 2025-05-25 04:11:50 | INFO  | Task 8d971383-7af0-4460-aa31-cdabb83173b7 is in state STARTED
2025-05-25 04:11:50.829731 | orchestrator | 2025-05-25 04:11:50 | INFO  | Task 6e378feb-528a-412a-95cd-f98cd2c708a8 is in state STARTED
2025-05-25 04:11:50.834785 | orchestrator | 2025-05-25 04:11:50 | INFO  | Task 54113aef-c85d-4bae-9b37-5c6f7b29248e is in state STARTED
2025-05-25 04:11:50.834979 | orchestrator | 2025-05-25 04:11:50 | INFO  | Wait 1 second(s) until the next check
2025-05-25 04:11:53.884928 | orchestrator | 2025-05-25 04:11:53 | INFO  | Task 8d971383-7af0-4460-aa31-cdabb83173b7 is in state STARTED
2025-05-25 04:11:53.886887 | orchestrator | 2025-05-25 04:11:53 | INFO  | Task 6e378feb-528a-412a-95cd-f98cd2c708a8 is in state STARTED
2025-05-25 04:11:53.889460 | orchestrator | 2025-05-25 04:11:53 | INFO  | Task 54113aef-c85d-4bae-9b37-5c6f7b29248e is in state STARTED
2025-05-25 04:11:53.889638 | orchestrator | 2025-05-25 04:11:53 | INFO  | Wait 1 second(s) until the next check
2025-05-25 04:11:56.940109 | orchestrator | 2025-05-25 04:11:56 | INFO  | Task 8d971383-7af0-4460-aa31-cdabb83173b7 is in state STARTED
2025-05-25 04:11:56.940331 | orchestrator | 2025-05-25 04:11:56 | INFO  | Task 6e378feb-528a-412a-95cd-f98cd2c708a8 is in state STARTED
2025-05-25 04:11:56.941969 | orchestrator | 2025-05-25 04:11:56 | INFO  | Task 54113aef-c85d-4bae-9b37-5c6f7b29248e is in state STARTED
2025-05-25 04:11:56.942122 | orchestrator | 2025-05-25 04:11:56 | INFO  | Wait 1 second(s) until the next check
2025-05-25 04:11:59.990386 | orchestrator | 2025-05-25 04:11:59 | INFO  | Task 8d971383-7af0-4460-aa31-cdabb83173b7 is in state STARTED
2025-05-25 04:11:59.995291 | orchestrator | 2025-05-25 04:11:59 | INFO  | Task 6e378feb-528a-412a-95cd-f98cd2c708a8 is in state SUCCESS
2025-05-25 04:11:59.996833 | orchestrator |
2025-05-25 04:11:59.996907 | orchestrator |
2025-05-25 04:11:59.996922 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2025-05-25 04:11:59.996935 | orchestrator |
2025-05-25 04:11:59.996947 | orchestrator | TASK [Group hosts based on OpenStack release] **********************************
2025-05-25 04:11:59.996958 | orchestrator | Sunday 25 May 2025 04:03:14 +0000 (0:00:00.250) 0:00:00.250 ************
2025-05-25 04:11:59.996969 | orchestrator | changed: [testbed-manager]
2025-05-25 04:11:59.996981 | orchestrator | changed: [testbed-node-0]
2025-05-25 04:11:59.996992 | orchestrator | changed: [testbed-node-1]
2025-05-25 04:11:59.997003 | orchestrator | changed: [testbed-node-2]
2025-05-25 04:11:59.997014 | orchestrator | changed: [testbed-node-3]
2025-05-25 04:11:59.997029 | orchestrator | changed: [testbed-node-4]
2025-05-25 04:11:59.997049 | orchestrator | changed: [testbed-node-5]
2025-05-25 04:11:59.997067 | orchestrator |
2025-05-25 04:11:59.997087 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2025-05-25 04:11:59.997766 | orchestrator | Sunday 25 May 2025 04:03:15 +0000 (0:00:00.728) 0:00:00.979 ************
2025-05-25 04:11:59.997784 | orchestrator | changed: [testbed-manager]
2025-05-25 04:11:59.997812 | orchestrator | changed: [testbed-node-0]
2025-05-25 04:11:59.997824 | orchestrator | changed: [testbed-node-1]
2025-05-25 04:11:59.997834 | orchestrator | changed: [testbed-node-2]
2025-05-25 04:11:59.997845 | orchestrator | changed: [testbed-node-3]
2025-05-25 04:11:59.997856 | orchestrator | changed: [testbed-node-4]
2025-05-25 04:11:59.997866 | orchestrator | changed: [testbed-node-5]
2025-05-25 04:11:59.997877 | orchestrator |
2025-05-25 04:11:59.997888 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2025-05-25 04:11:59.997899 | orchestrator | Sunday 25 May 2025 04:03:16 +0000 (0:00:00.609) 0:00:01.588 ************
2025-05-25 04:11:59.997910 | orchestrator | changed: [testbed-manager] => (item=enable_nova_True)
2025-05-25 04:11:59.997922 | orchestrator | changed: [testbed-node-0] => (item=enable_nova_True)
2025-05-25 04:11:59.997933 | orchestrator | changed: [testbed-node-1] => (item=enable_nova_True)
2025-05-25 04:11:59.997944 | orchestrator | changed: [testbed-node-2] => (item=enable_nova_True)
2025-05-25 04:11:59.997954 | orchestrator | changed: [testbed-node-3] => (item=enable_nova_True)
2025-05-25 04:11:59.997965 | orchestrator | changed: [testbed-node-4] => (item=enable_nova_True)
2025-05-25 04:11:59.997976 | orchestrator | changed: [testbed-node-5] => (item=enable_nova_True)
2025-05-25 04:11:59.997987 | orchestrator |
2025-05-25 04:11:59.997998 | orchestrator | PLAY [Bootstrap nova API databases] ********************************************
2025-05-25 04:11:59.998009 | orchestrator |
2025-05-25 04:11:59.998072 | orchestrator | TASK [Bootstrap deploy] ********************************************************
2025-05-25 04:11:59.998086 | orchestrator | Sunday 25 May 2025 04:03:17 +0000 (0:00:00.793) 0:00:02.381 ************
2025-05-25 04:11:59.998098 | orchestrator | included: nova for testbed-node-0, testbed-node-1, testbed-node-2
2025-05-25 04:11:59.998108 | orchestrator |
2025-05-25 04:11:59.998119 | orchestrator | TASK [nova : Creating Nova databases] ******************************************
2025-05-25 04:11:59.998130 | orchestrator | Sunday 25 May 2025 04:03:17 +0000 (0:00:00.569) 0:00:02.951 ************
2025-05-25 04:11:59.998141 | orchestrator | changed: [testbed-node-0] => (item=nova_cell0)
2025-05-25 04:11:59.998152 | orchestrator | changed: [testbed-node-0] => (item=nova_api)
2025-05-25 04:11:59.998163 | orchestrator |
2025-05-25 04:11:59.998179 | orchestrator | TASK [nova : Creating Nova databases user and setting permissions] *************
2025-05-25 04:11:59.998197 | orchestrator | Sunday 25 May 2025 04:03:21 +0000 (0:00:03.925) 0:00:06.876 ************
2025-05-25 04:11:59.998216 | orchestrator | changed: [testbed-node-0] => (item=None)
2025-05-25 04:11:59.999013 | orchestrator | changed: [testbed-node-0] => (item=None)
2025-05-25 04:11:59.999056 | orchestrator | changed: [testbed-node-0]
2025-05-25 04:11:59.999068 | orchestrator |
2025-05-25 04:11:59.999079 | orchestrator | TASK [nova : Ensuring config directories exist] ********************************
2025-05-25 04:11:59.999110 | orchestrator | Sunday 25 May 2025 04:03:25 +0000 (0:00:04.001) 0:00:10.877 ************
2025-05-25 04:11:59.999121 | orchestrator | changed: [testbed-node-0]
2025-05-25 04:11:59.999132 | orchestrator |
2025-05-25 04:11:59.999143 | orchestrator | TASK [nova : Copying over config.json files for nova-api-bootstrap] ************
2025-05-25 04:11:59.999154 | orchestrator | Sunday 25 May 2025 04:03:26 +0000 (0:00:00.758) 0:00:11.635 ************
2025-05-25 04:11:59.999164 | orchestrator | changed: [testbed-node-0]
2025-05-25 04:11:59.999175 | orchestrator |
2025-05-25 04:11:59.999186 | orchestrator | TASK [nova : Copying over nova.conf for nova-api-bootstrap] ********************
2025-05-25 04:11:59.999196 | orchestrator | Sunday 25 May 2025 04:03:27 +0000 (0:00:01.441) 0:00:13.077 ************
2025-05-25 04:11:59.999207 | orchestrator | changed: [testbed-node-0]
2025-05-25 04:11:59.999217 | orchestrator |
2025-05-25 04:11:59.999229 | orchestrator | TASK [nova : include_tasks] ****************************************************
2025-05-25 04:11:59.999240 | orchestrator | Sunday 25 May 2025 04:03:31 +0000 (0:00:03.476) 0:00:16.554 ************
2025-05-25 04:11:59.999251 | orchestrator | skipping: [testbed-node-0]
2025-05-25 04:11:59.999262 | orchestrator | skipping: [testbed-node-1]
2025-05-25 04:11:59.999272 | orchestrator | skipping: [testbed-node-2]
2025-05-25 04:11:59.999283 | orchestrator |
2025-05-25 04:11:59.999294 | orchestrator | TASK [nova : Running Nova API bootstrap container] *****************************
2025-05-25 04:11:59.999304 | orchestrator | Sunday 25 May 2025 04:03:31 +0000 (0:00:00.485) 0:00:17.040 ************
2025-05-25 04:11:59.999315 | orchestrator | ok: [testbed-node-0]
2025-05-25 04:11:59.999326 | orchestrator |
2025-05-25 04:11:59.999337 | orchestrator | TASK [nova : Create cell0 mappings] ********************************************
2025-05-25 04:11:59.999352 | orchestrator | Sunday 25 May 2025 04:03:58 +0000 (0:00:26.888) 0:00:43.928 ************
2025-05-25 04:11:59.999369 | orchestrator | changed: [testbed-node-0]
2025-05-25 04:11:59.999388 | orchestrator |
2025-05-25 04:11:59.999407 | orchestrator | TASK [nova-cell : Get a list of existing cells] ********************************
2025-05-25 04:11:59.999427 | orchestrator | Sunday 25 May 2025 04:04:11 +0000 (0:00:12.636) 0:00:56.564 ************
2025-05-25 04:11:59.999445 | orchestrator | ok: [testbed-node-0]
2025-05-25 04:11:59.999463 | orchestrator |
2025-05-25 04:11:59.999481 | orchestrator | TASK [nova-cell : Extract current cell settings from list] *********************
2025-05-25 04:11:59.999501 | orchestrator | Sunday 25 May 2025 04:04:21 +0000 (0:00:10.145) 0:01:06.709 ************
2025-05-25 04:12:00.000156 | orchestrator | ok: [testbed-node-0]
2025-05-25 04:12:00.000196 | orchestrator |
2025-05-25 04:12:00.000212 | orchestrator | TASK [nova : Update cell0 mappings] ********************************************
2025-05-25 04:12:00.000230 | orchestrator | Sunday 25 May 2025 04:04:23 +0000 (0:00:01.990) 0:01:08.700 ************
2025-05-25 04:12:00.000247 | orchestrator | skipping: [testbed-node-0]
2025-05-25 04:12:00.000264 | orchestrator |
2025-05-25 04:12:00.000281 | orchestrator | TASK [nova : include_tasks] ****************************************************
2025-05-25 04:12:00.000296 | orchestrator | Sunday 25 May 2025 04:04:24 +0000 (0:00:01.163) 0:01:09.865 ************
2025-05-25 04:12:00.000313 | orchestrator | included: /ansible/roles/nova/tasks/bootstrap_service.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-05-25 04:12:00.000327 | orchestrator |
2025-05-25 04:12:00.000341 | orchestrator | TASK [nova : Running Nova API bootstrap container] *****************************
2025-05-25 04:12:00.000356 | orchestrator | Sunday 25 May 2025 04:04:26 +0000 (0:00:01.830) 0:01:11.695 ************
2025-05-25 04:12:00.000371 | orchestrator | ok: [testbed-node-0]
2025-05-25 04:12:00.000386 | orchestrator |
2025-05-25 04:12:00.000415 | orchestrator | TASK [Bootstrap upgrade] *******************************************************
2025-05-25 04:12:00.000432 | orchestrator | Sunday 25 May 2025 04:04:44 +0000 (0:00:18.068) 0:01:29.763 ************
2025-05-25 04:12:00.000448 | orchestrator | skipping: [testbed-node-0]
2025-05-25 04:12:00.000464 | orchestrator | skipping: [testbed-node-1]
2025-05-25 04:12:00.000474 | orchestrator | skipping: [testbed-node-2]
2025-05-25 04:12:00.000484 | orchestrator |
2025-05-25 04:12:00.000505 | orchestrator | PLAY [Bootstrap nova cell databases] *******************************************
2025-05-25 04:12:00.000549 | orchestrator |
2025-05-25 04:12:00.000565 | orchestrator | TASK [Bootstrap deploy] ********************************************************
2025-05-25 04:12:00.000575 | orchestrator | Sunday 25 May 2025 04:04:44 +0000 (0:00:00.455) 0:01:30.219 ************
2025-05-25 04:12:00.000589 | orchestrator | included: nova-cell for testbed-node-0, testbed-node-1, testbed-node-2
2025-05-25 04:12:00.000606 | orchestrator |
2025-05-25 04:12:00.000617 | orchestrator | TASK [nova-cell : Creating Nova cell database] *********************************
2025-05-25 04:12:00.000626 | orchestrator | Sunday 25 May 2025 04:04:45 +0000 (0:00:00.590) 0:01:30.810 ************
2025-05-25 04:12:00.000636 | orchestrator | skipping: [testbed-node-1]
2025-05-25 04:12:00.000646 | orchestrator | skipping: [testbed-node-2]
2025-05-25 04:12:00.000655 | orchestrator | changed: [testbed-node-0]
2025-05-25 04:12:00.000665 | orchestrator |
2025-05-25 04:12:00.000674 | orchestrator | TASK [nova-cell : Creating Nova cell database user and setting permissions] ****
2025-05-25 04:12:00.000684 | orchestrator | Sunday 25 May 2025 04:04:47 +0000 (0:00:01.999) 0:01:32.810 ************
2025-05-25 04:12:00.000694 | orchestrator | skipping: [testbed-node-1]
2025-05-25 04:12:00.000703 | orchestrator | skipping: [testbed-node-2]
2025-05-25 04:12:00.000713 | orchestrator | changed: [testbed-node-0]
2025-05-25 04:12:00.000722 | orchestrator |
2025-05-25 04:12:00.000732 | orchestrator | TASK [service-rabbitmq : nova | Ensure RabbitMQ vhosts exist] ******************
2025-05-25 04:12:00.000742 | orchestrator | Sunday 25 May 2025 04:04:49 +0000 (0:00:02.143) 0:01:34.953 ************
2025-05-25 04:12:00.000751 | orchestrator | skipping: [testbed-node-0]
2025-05-25 04:12:00.000761 | orchestrator | skipping: [testbed-node-1]
2025-05-25 04:12:00.000773 | orchestrator | skipping: [testbed-node-2]
2025-05-25 04:12:00.000785 | orchestrator |
2025-05-25 04:12:00.000796 | orchestrator | TASK [service-rabbitmq : nova | Ensure RabbitMQ users exist] *******************
2025-05-25 04:12:00.000808 | orchestrator | Sunday 25 May 2025 04:04:49 +0000 (0:00:00.332) 0:01:35.285 ************
2025-05-25 04:12:00.000820 | orchestrator | skipping: [testbed-node-1] => (item=None)
2025-05-25 04:12:00.000831 | orchestrator | skipping: [testbed-node-1]
2025-05-25 04:12:00.000842 | orchestrator | skipping: [testbed-node-2] => (item=None)
2025-05-25 04:12:00.000854 | orchestrator | skipping: [testbed-node-2]
2025-05-25 04:12:00.000866 | orchestrator | ok: [testbed-node-0] => (item=None)
2025-05-25 04:12:00.000876 | orchestrator | ok: [testbed-node-0 -> {{ service_rabbitmq_delegate_host }}]
2025-05-25 04:12:00.000885 | orchestrator |
2025-05-25 04:12:00.000895 | orchestrator | TASK [service-rabbitmq : nova | Ensure RabbitMQ vhosts exist] ******************
2025-05-25 04:12:00.000905 | orchestrator | Sunday 25 May 2025 04:04:58 +0000 (0:00:08.371) 0:01:43.657 ************
2025-05-25 04:12:00.000915 | orchestrator | skipping: [testbed-node-0]
2025-05-25 04:12:00.000924 | orchestrator | skipping: [testbed-node-1]
2025-05-25 04:12:00.000933 | orchestrator | skipping: [testbed-node-2]
2025-05-25 04:12:00.000943 | orchestrator |
2025-05-25 04:12:00.000952 | orchestrator | TASK [service-rabbitmq : nova | Ensure RabbitMQ users exist] *******************
2025-05-25 04:12:00.000962 | orchestrator | Sunday 25 May 2025 04:04:58 +0000 (0:00:00.641) 0:01:44.298 ************
2025-05-25 04:12:00.000973 | orchestrator | skipping: [testbed-node-0] => (item=None)
2025-05-25 04:12:00.000990 | orchestrator | skipping: [testbed-node-0]
2025-05-25 04:12:00.001006 | orchestrator | skipping: [testbed-node-1] => (item=None)
2025-05-25 04:12:00.001024 | orchestrator | skipping: [testbed-node-1]
2025-05-25 04:12:00.001035 | orchestrator | skipping: [testbed-node-2] => (item=None)
2025-05-25 04:12:00.001045 | orchestrator | skipping: [testbed-node-2]
2025-05-25 04:12:00.001054 | orchestrator |
2025-05-25 04:12:00.001064 | orchestrator | TASK [nova-cell : Ensuring config directories exist] ***************************
2025-05-25 04:12:00.001074 | orchestrator | Sunday 25 May 2025 04:04:59 +0000 (0:00:00.946) 0:01:45.245 ************
2025-05-25 04:12:00.001084 | orchestrator | skipping: [testbed-node-1]
2025-05-25 04:12:00.001093 | orchestrator | changed: [testbed-node-0]
2025-05-25 04:12:00.001111 | orchestrator | skipping: [testbed-node-2]
2025-05-25 04:12:00.001121 | orchestrator |
2025-05-25 04:12:00.001131 | orchestrator | TASK [nova-cell : Copying over config.json files for nova-cell-bootstrap] ******
2025-05-25 04:12:00.001140 | orchestrator | Sunday 25 May 2025 04:05:01 +0000 (0:00:01.125) 0:01:46.371 ************
2025-05-25 04:12:00.001150 | orchestrator | skipping: [testbed-node-1]
2025-05-25 04:12:00.001160 | orchestrator | skipping: [testbed-node-2]
2025-05-25 04:12:00.001172 | orchestrator | changed: [testbed-node-0]
2025-05-25 04:12:00.001189 | orchestrator |
2025-05-25 04:12:00.001204 | orchestrator | TASK [nova-cell : Copying over nova.conf for nova-cell-bootstrap] **************
2025-05-25 04:12:00.001219 | orchestrator | Sunday 25 May 2025 04:05:02 +0000 (0:00:01.139) 0:01:47.510 ************
2025-05-25 04:12:00.001234 | orchestrator | skipping: [testbed-node-1]
2025-05-25 04:12:00.001249 | orchestrator | skipping: [testbed-node-2]
2025-05-25 04:12:00.001403 | orchestrator | changed: [testbed-node-0]
2025-05-25 04:12:00.001420 | orchestrator |
2025-05-25 04:12:00.001437 | orchestrator | TASK [nova-cell : Running Nova cell bootstrap container] ***********************
2025-05-25 04:12:00.001453 | orchestrator | Sunday 25 May 2025 04:05:04 +0000 (0:00:02.387) 0:01:49.898 ************
2025-05-25 04:12:00.001470 | orchestrator | skipping: [testbed-node-1]
2025-05-25 04:12:00.001485 | orchestrator | skipping: [testbed-node-2]
2025-05-25 04:12:00.001502 | orchestrator | ok: [testbed-node-0]
2025-05-25 04:12:00.001541 | orchestrator |
2025-05-25 04:12:00.001557 | orchestrator | TASK [nova-cell : Get a list of existing cells] ********************************
2025-05-25 04:12:00.001574 | orchestrator | Sunday 25 May 2025 04:05:23 +0000 (0:00:19.010) 0:02:08.909 ************
2025-05-25 04:12:00.001590 | orchestrator | skipping: [testbed-node-1]
2025-05-25 04:12:00.001600 | orchestrator | skipping: [testbed-node-2]
2025-05-25 04:12:00.001609 | orchestrator | ok: [testbed-node-0]
2025-05-25 04:12:00.001619 | orchestrator |
2025-05-25 04:12:00.001629 | orchestrator | TASK [nova-cell : Extract current cell settings from list] *********************
2025-05-25 04:12:00.001648 | orchestrator | Sunday 25 May 2025 04:05:36 +0000 (0:00:12.521) 0:02:21.430 ************
2025-05-25 04:12:00.001658 | orchestrator | ok: [testbed-node-0]
2025-05-25 04:12:00.001668 | orchestrator | skipping: [testbed-node-2]
2025-05-25 04:12:00.001677 | orchestrator | skipping: [testbed-node-1]
2025-05-25 04:12:00.001687 | orchestrator |
2025-05-25 04:12:00.001697 | orchestrator | TASK [nova-cell : Create cell] *************************************************
2025-05-25 04:12:00.001706 | orchestrator | Sunday 25 May 2025 04:05:37 +0000 (0:00:01.004) 0:02:22.435 ************
2025-05-25 04:12:00.001716 | orchestrator | skipping: [testbed-node-1]
2025-05-25 04:12:00.001725 | orchestrator | skipping: [testbed-node-2]
2025-05-25 04:12:00.001735 | orchestrator | changed: [testbed-node-0]
2025-05-25 04:12:00.001744 | orchestrator |
2025-05-25 04:12:00.001754 | orchestrator | TASK [nova-cell : Update cell] *************************************************
2025-05-25 04:12:00.001764 | orchestrator | Sunday 25 May 2025 04:05:47 +0000 (0:00:10.400) 0:02:32.835 ************
2025-05-25 04:12:00.001773 | orchestrator | skipping: [testbed-node-0]
2025-05-25 04:12:00.001783 | orchestrator | skipping: [testbed-node-1]
2025-05-25 04:12:00.001792 | orchestrator | skipping: [testbed-node-2]
2025-05-25 04:12:00.001802 | orchestrator |
2025-05-25 04:12:00.001812 | orchestrator | TASK [Bootstrap upgrade] *******************************************************
2025-05-25 04:12:00.001822 | orchestrator | Sunday 25 May 2025 04:05:48 +0000 (0:00:01.431) 0:02:34.267 ************
2025-05-25 04:12:00.001831 | orchestrator | skipping: [testbed-node-0]
2025-05-25 04:12:00.001841 | orchestrator | skipping: [testbed-node-1]
2025-05-25 04:12:00.001850 | orchestrator | skipping: [testbed-node-2]
2025-05-25 04:12:00.001860 | orchestrator |
2025-05-25 04:12:00.001869 | orchestrator | PLAY [Apply role nova] *********************************************************
2025-05-25 04:12:00.001905 | orchestrator |
2025-05-25 04:12:00.001916 | orchestrator | TASK [nova : include_tasks] ****************************************************
2025-05-25 04:12:00.001933 | orchestrator | Sunday 25 May 2025 04:05:49 +0000 (0:00:00.325) 0:02:34.592 ************
2025-05-25 04:12:00.001949 | orchestrator | included: /ansible/roles/nova/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-05-25 04:12:00.001974 | orchestrator |
2025-05-25 04:12:00.001984 | orchestrator | TASK [service-ks-register : nova | Creating services] **************************
2025-05-25 04:12:00.001994 | orchestrator | Sunday 25 May 2025 04:05:49 +0000 (0:00:00.485) 0:02:35.077 ************
2025-05-25 04:12:00.002006 | orchestrator | skipping: [testbed-node-0] => (item=nova_legacy (compute_legacy))
2025-05-25 04:12:00.002051 | orchestrator | changed: [testbed-node-0] => (item=nova (compute))
2025-05-25 04:12:00.002066 | orchestrator |
2025-05-25 04:12:00.002078 | orchestrator | TASK [service-ks-register : nova | Creating endpoints] *************************
2025-05-25 04:12:00.002090 | orchestrator | Sunday 25 May 2025 04:05:52 +0000 (0:00:03.011) 0:02:38.089 ************
2025-05-25 04:12:00.002101 | orchestrator | skipping: [testbed-node-0] => (item=nova_legacy -> https://api-int.testbed.osism.xyz:8774/v2/%(tenant_id)s -> internal)
2025-05-25 04:12:00.002115 | orchestrator | skipping: [testbed-node-0] => (item=nova_legacy -> https://api.testbed.osism.xyz:8774/v2/%(tenant_id)s -> public)
2025-05-25 04:12:00.002124 | orchestrator | changed: [testbed-node-0] => (item=nova -> https://api-int.testbed.osism.xyz:8774/v2.1 -> internal)
2025-05-25 04:12:00.002135 | orchestrator | changed: [testbed-node-0] => (item=nova -> https://api.testbed.osism.xyz:8774/v2.1 -> public)
2025-05-25 04:12:00.002144 | orchestrator |
2025-05-25 04:12:00.002154 | orchestrator | TASK [service-ks-register : nova | Creating projects] **************************
2025-05-25 04:12:00.002164 | orchestrator | Sunday 25 May 2025 04:05:58 +0000 (0:00:06.143) 0:02:44.233 ************
2025-05-25 04:12:00.002173 | orchestrator | ok: [testbed-node-0] => (item=service)
2025-05-25 04:12:00.002183 | orchestrator |
2025-05-25 04:12:00.002193 | orchestrator | TASK [service-ks-register : nova | Creating users] *****************************
2025-05-25 04:12:00.002202 | orchestrator | Sunday 25 May 2025 04:06:01 +0000 (0:00:02.992) 0:02:47.225 ************
2025-05-25 04:12:00.002212 | orchestrator | [WARNING]: Module did not set no_log for update_password
2025-05-25 04:12:00.002221 | orchestrator | changed: [testbed-node-0] => (item=nova -> service)
2025-05-25 04:12:00.002231 | orchestrator |
2025-05-25 04:12:00.002241 | orchestrator | TASK [service-ks-register : nova | Creating roles] *****************************
2025-05-25 04:12:00.002250 | orchestrator | Sunday 25 May 2025 04:06:05 +0000 (0:00:03.660) 0:02:50.886 ************
2025-05-25 04:12:00.002260 | orchestrator | ok: [testbed-node-0] => (item=admin)
2025-05-25 04:12:00.002270 | orchestrator |
2025-05-25 04:12:00.002279 | orchestrator | TASK [service-ks-register : nova | Granting user roles] ************************
2025-05-25 04:12:00.002289 | orchestrator | Sunday 25 May 2025 04:06:08 +0000 (0:00:03.008) 0:02:53.894 ************
2025-05-25 04:12:00.002299 | orchestrator | changed: [testbed-node-0] => (item=nova -> service -> admin)
2025-05-25 04:12:00.002308 | orchestrator | changed: [testbed-node-0] => (item=nova -> service -> service)
2025-05-25 04:12:00.002318 | orchestrator |
2025-05-25 04:12:00.002327 | orchestrator | TASK [nova : Ensuring config directories exist] ********************************
2025-05-25 04:12:00.002435 | orchestrator | Sunday 25 May 2025 04:06:15 +0000 (0:00:06.997) 0:03:00.892 ************
2025-05-25 04:12:00.002460 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})
2025-05-25 04:12:00.002492 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})
2025-05-25 04:12:00.002581 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})
2025-05-25 04:12:00.002649 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})
2025-05-25 04:12:00.002680 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})
2025-05-25 04:12:00.002696 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})
2025-05-25 04:12:00.002715 | orchestrator |
2025-05-25 04:12:00.002726 | orchestrator | TASK [nova : Check if policies shall be overwritten] ***************************
2025-05-25 04:12:00.002736 | orchestrator | Sunday 25 May 2025 04:06:17 +0000 (0:00:02.029) 0:03:02.922 ************
2025-05-25 04:12:00.002745 | orchestrator | skipping: [testbed-node-0]
2025-05-25 04:12:00.002755 | orchestrator |
2025-05-25 04:12:00.002765 | orchestrator | TASK [nova : Set nova policy file] *********************************************
2025-05-25 04:12:00.002775 | orchestrator | Sunday 25 May 2025 04:06:17 +0000 (0:00:00.150) 0:03:03.072 ************
2025-05-25 04:12:00.002784 | orchestrator | skipping: [testbed-node-0]
2025-05-25 04:12:00.002794 | orchestrator | skipping: [testbed-node-1]
2025-05-25 04:12:00.002804 | orchestrator | skipping: [testbed-node-2]
2025-05-25 04:12:00.002813 | orchestrator |
2025-05-25 04:12:00.002823 | orchestrator | TASK [nova : Check for vendordata file] ****************************************
2025-05-25 04:12:00.002832 | orchestrator | Sunday 25 May 2025 04:06:18 +0000 (0:00:00.509) 0:03:03.582 ************
2025-05-25 04:12:00.002842 | orchestrator | ok: [testbed-node-0 -> localhost]
2025-05-25 04:12:00.002851 | orchestrator |
2025-05-25 04:12:00.002861 | orchestrator | TASK [nova : Set vendordata file path] *****************************************
2025-05-25 04:12:00.002871 | orchestrator | Sunday 25 May 2025 04:06:18 +0000 (0:00:00.645) 0:03:04.228 ************
2025-05-25 04:12:00.002880 | orchestrator | skipping: [testbed-node-0]
2025-05-25 04:12:00.002890 | orchestrator | skipping: [testbed-node-1]
2025-05-25 04:12:00.002899 | orchestrator | skipping: [testbed-node-2]
2025-05-25 04:12:00.002909 | orchestrator |
2025-05-25 04:12:00.002918 | orchestrator | TASK [nova : include_tasks] ****************************************************
2025-05-25 04:12:00.002928 | orchestrator | Sunday 25 May 2025 04:06:19 +0000 (0:00:00.322) 0:03:04.550 ************
2025-05-25 04:12:00.002938 | orchestrator | included: /ansible/roles/nova/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-05-25 04:12:00.002947 | orchestrator |
2025-05-25 04:12:00.002957 | orchestrator | TASK [service-cert-copy : nova | Copying over extra CA certificates] ***********
2025-05-25 04:12:00.002967 | orchestrator | Sunday 25 May 2025 04:06:20 +0000 (0:00:01.557) 0:03:06.108 ************
2025-05-25 04:12:00.002977 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})
2025-05-25 04:12:00.003026 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})
2025-05-25 04:12:00.003047 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})
2025-05-25 04:12:00.003061 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})
2025-05-25 04:12:00.003074 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})
2025-05-25 04:12:00.003108 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})
2025-05-25 04:12:00.003143 | orchestrator |
2025-05-25 04:12:00.003153 | orchestrator | TASK [service-cert-copy : nova | Copying over backend internal TLS certificate] ***
2025-05-25 04:12:00.003163 | orchestrator | Sunday 25 May 2025 04:06:23 +0000 (0:00:02.954) 0:03:09.063 ************
2025-05-25 04:12:00.003177 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})
2025-05-25 04:12:00.003188 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})
2025-05-25 04:12:00.003198 | orchestrator | skipping: [testbed-node-0]
2025-05-25
04:12:00.003209 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2025-05-25 04:12:00.003219 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-05-25 04:12:00.003264 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 
'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2025-05-25 04:12:00.003276 | orchestrator | skipping: [testbed-node-1] 2025-05-25 04:12:00.003286 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-05-25 04:12:00.003299 | orchestrator | skipping: [testbed-node-2] 2025-05-25 04:12:00.003313 | orchestrator | 2025-05-25 04:12:00.003329 | orchestrator | TASK 
[service-cert-copy : nova | Copying over backend internal TLS key] ******** 2025-05-25 04:12:00.003340 | orchestrator | Sunday 25 May 2025 04:06:24 +0000 (0:00:00.797) 0:03:09.860 ************ 2025-05-25 04:12:00.003351 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2025-05-25 04:12:00.003361 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-05-25 04:12:00.003379 | orchestrator | skipping: [testbed-node-0] 2025-05-25 04:12:00.003420 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2025-05-25 04:12:00.003431 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 
5672'], 'timeout': '30'}}})  2025-05-25 04:12:00.003439 | orchestrator | skipping: [testbed-node-1] 2025-05-25 04:12:00.003448 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2025-05-25 04:12:00.003457 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-05-25 
04:12:00.003465 | orchestrator | skipping: [testbed-node-2] 2025-05-25 04:12:00.003480 | orchestrator | 2025-05-25 04:12:00.003488 | orchestrator | TASK [nova : Copying over config.json files for services] ********************** 2025-05-25 04:12:00.003496 | orchestrator | Sunday 25 May 2025 04:06:25 +0000 (0:00:00.766) 0:03:10.627 ************ 2025-05-25 04:12:00.003559 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-05-25 04:12:00.003579 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-05-25 04:12:00.003589 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 
'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-05-25 04:12:00.003598 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-05-25 04:12:00.003651 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-05-25 04:12:00.003663 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 
'timeout': '30'}}}) 2025-05-25 04:12:00.003671 | orchestrator | 2025-05-25 04:12:00.003684 | orchestrator | TASK [nova : Copying over nova.conf] ******************************************* 2025-05-25 04:12:00.003692 | orchestrator | Sunday 25 May 2025 04:06:27 +0000 (0:00:02.376) 0:03:13.003 ************ 2025-05-25 04:12:00.003701 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-05-25 04:12:00.003710 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-05-25 04:12:00.003747 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': 
True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-05-25 04:12:00.003761 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-05-25 04:12:00.003770 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-05-25 04:12:00.003778 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-05-25 04:12:00.003786 | orchestrator | 2025-05-25 
04:12:00.003794 | orchestrator | TASK [nova : Copying over existing policy file] ******************************** 2025-05-25 04:12:00.003802 | orchestrator | Sunday 25 May 2025 04:06:35 +0000 (0:00:07.644) 0:03:20.647 ************ 2025-05-25 04:12:00.003811 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2025-05-25 04:12:00.003846 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-05-25 04:12:00.003855 | orchestrator | skipping: [testbed-node-0] 2025-05-25 04:12:00.003868 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2025-05-25 04:12:00.003877 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-05-25 04:12:00.003885 | orchestrator | skipping: [testbed-node-1] 2025-05-25 04:12:00.003894 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2025-05-25 04:12:00.003907 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 
5672'], 'timeout': '30'}}})  2025-05-25 04:12:00.003916 | orchestrator | skipping: [testbed-node-2] 2025-05-25 04:12:00.003924 | orchestrator | 2025-05-25 04:12:00.003932 | orchestrator | TASK [nova : Copying over nova-api-wsgi.conf] ********************************** 2025-05-25 04:12:00.003940 | orchestrator | Sunday 25 May 2025 04:06:35 +0000 (0:00:00.597) 0:03:21.245 ************ 2025-05-25 04:12:00.003948 | orchestrator | changed: [testbed-node-1] 2025-05-25 04:12:00.003956 | orchestrator | changed: [testbed-node-2] 2025-05-25 04:12:00.003964 | orchestrator | changed: [testbed-node-0] 2025-05-25 04:12:00.003972 | orchestrator | 2025-05-25 04:12:00.004000 | orchestrator | TASK [nova : Copying over vendordata file] ************************************* 2025-05-25 04:12:00.004010 | orchestrator | Sunday 25 May 2025 04:06:37 +0000 (0:00:01.885) 0:03:23.130 ************ 2025-05-25 04:12:00.004018 | orchestrator | skipping: [testbed-node-0] 2025-05-25 04:12:00.004026 | orchestrator | skipping: [testbed-node-1] 2025-05-25 04:12:00.004034 | orchestrator | skipping: [testbed-node-2] 2025-05-25 04:12:00.004042 | orchestrator | 2025-05-25 04:12:00.004050 | orchestrator | TASK [nova : Check nova containers] ******************************************** 2025-05-25 04:12:00.004058 | orchestrator | Sunday 25 May 2025 04:06:38 +0000 (0:00:00.551) 0:03:23.682 ************ 2025-05-25 04:12:00.004073 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 
'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-05-25 04:12:00.004083 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-05-25 04:12:00.004119 | orchestrator | changed: [testbed-node-1] => (item={'key': 
'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-05-25 04:12:00.004130 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-05-25 04:12:00.004142 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 
'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})
2025-05-25 04:12:00.004150 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})
2025-05-25 04:12:00.004159 | orchestrator |
2025-05-25 04:12:00.004170 | orchestrator | TASK [nova : Flush handlers] ***************************************************
2025-05-25 04:12:00.004178 | orchestrator | Sunday 25 May 2025 04:06:40 +0000 (0:00:01.794) 0:03:25.476 ************
2025-05-25 04:12:00.004186 | orchestrator |
2025-05-25 04:12:00.004194 | orchestrator | TASK [nova : Flush handlers] ***************************************************
2025-05-25 04:12:00.004202 | orchestrator | Sunday 25 May 2025 04:06:40 +0000 (0:00:00.290) 0:03:25.766 ************
2025-05-25 04:12:00.004210 | orchestrator |
2025-05-25 04:12:00.004217 | orchestrator | TASK [nova : Flush handlers] ***************************************************
2025-05-25 04:12:00.004225 | orchestrator | Sunday 25 May 2025 04:06:40 +0000 (0:00:00.259) 0:03:26.026 ************
2025-05-25 04:12:00.004233 | orchestrator |
2025-05-25 04:12:00.004241 | orchestrator | RUNNING HANDLER [nova : Restart nova-scheduler container] **********************
2025-05-25 04:12:00.004249 | orchestrator | Sunday 25 May 2025 04:06:41 +0000 (0:00:00.444) 0:03:26.470 ************
2025-05-25 04:12:00.004257 | orchestrator | changed: [testbed-node-0]
2025-05-25 04:12:00.004264 | orchestrator | changed: [testbed-node-2]
2025-05-25 04:12:00.004272 | orchestrator | changed: [testbed-node-1]
2025-05-25 04:12:00.004280 | orchestrator |
2025-05-25 04:12:00.004288 | orchestrator | RUNNING HANDLER [nova : Restart nova-api container] ****************************
2025-05-25 04:12:00.004296 | orchestrator | Sunday 25 May 2025 04:07:02 +0000 (0:00:21.618) 0:03:48.088 ************
2025-05-25 04:12:00.004303 | orchestrator | changed: [testbed-node-0]
2025-05-25 04:12:00.004311 | orchestrator | changed: [testbed-node-2]
2025-05-25 04:12:00.004319 | orchestrator | changed: [testbed-node-1]
2025-05-25 04:12:00.004327 | orchestrator |
2025-05-25 04:12:00.004335 | orchestrator | PLAY [Apply role nova-cell] ****************************************************
2025-05-25 04:12:00.004342 | orchestrator |
2025-05-25 04:12:00.004350 | orchestrator | TASK [nova-cell : include_tasks] ***********************************************
2025-05-25 04:12:00.004358 | orchestrator | Sunday 25 May 2025 04:07:09 +0000 (0:00:06.382) 0:03:54.471 ************
2025-05-25 04:12:00.004366 | orchestrator | included: /ansible/roles/nova-cell/tasks/deploy.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2025-05-25 04:12:00.004375 | orchestrator |
2025-05-25 04:12:00.004383 | orchestrator | TASK [nova-cell : include_tasks] ***********************************************
2025-05-25 04:12:00.004390 | orchestrator | Sunday 25 May 2025 04:07:10 +0000 (0:00:01.646) 0:03:56.118 ************
2025-05-25 04:12:00.004398 | orchestrator | skipping: [testbed-node-3]
2025-05-25 04:12:00.004406 | orchestrator | skipping: [testbed-node-4]
2025-05-25 04:12:00.004414 | orchestrator | skipping: [testbed-node-5]
2025-05-25 04:12:00.004422 | orchestrator | skipping: [testbed-node-0]
2025-05-25 04:12:00.004429 | orchestrator | skipping: [testbed-node-1]
2025-05-25 04:12:00.004437 | orchestrator | skipping: [testbed-node-2]
2025-05-25 04:12:00.004445 | orchestrator |
2025-05-25 04:12:00.004452 | orchestrator | TASK [Load and persist br_netfilter module] ************************************
2025-05-25 04:12:00.004464 | orchestrator | Sunday 25 May 2025 04:07:12 +0000 (0:00:01.360) 0:03:57.478 ************
2025-05-25 04:12:00.004479 | orchestrator | skipping: [testbed-node-0]
2025-05-25 04:12:00.004491 | orchestrator | skipping: [testbed-node-1]
2025-05-25 04:12:00.004499 | orchestrator | skipping: [testbed-node-2]
2025-05-25 04:12:00.004507 | orchestrator | included: module-load for testbed-node-3, testbed-node-4, testbed-node-5
2025-05-25 04:12:00.004532 | orchestrator |
2025-05-25 04:12:00.004540 | orchestrator | TASK [module-load : Load modules] **********************************************
2025-05-25 04:12:00.004574 | orchestrator | Sunday 25 May 2025 04:07:13 +0000 (0:00:01.016) 0:03:58.495 ************
2025-05-25 04:12:00.004584 | orchestrator | ok: [testbed-node-3] => (item=br_netfilter)
2025-05-25 04:12:00.004592 | orchestrator | ok: [testbed-node-4] => (item=br_netfilter)
2025-05-25 04:12:00.004600 | orchestrator | ok: [testbed-node-5] => (item=br_netfilter)
2025-05-25 04:12:00.004608 | orchestrator |
2025-05-25 04:12:00.004616 | orchestrator | TASK [module-load : Persist modules via modules-load.d] ************************
2025-05-25 04:12:00.004624 | orchestrator | Sunday 25 May 2025 04:07:14 +0000 (0:00:01.008) 0:03:59.503 ************
2025-05-25 04:12:00.004638 | orchestrator | changed: [testbed-node-3] => (item=br_netfilter)
2025-05-25 04:12:00.004646 | orchestrator | changed: [testbed-node-4] => (item=br_netfilter)
2025-05-25 04:12:00.004654 | orchestrator | changed: [testbed-node-5] => (item=br_netfilter)
2025-05-25 04:12:00.004662 | orchestrator |
2025-05-25 04:12:00.004669 | orchestrator | TASK [module-load : Drop module persistence] ***********************************
2025-05-25 04:12:00.004677 | orchestrator | Sunday 25 May 2025 04:07:15 +0000 (0:00:01.573) 0:04:01.076 ************
2025-05-25 04:12:00.004689 | orchestrator | skipping: [testbed-node-3] => (item=br_netfilter)
2025-05-25 04:12:00.004698 | orchestrator | skipping: [testbed-node-3]
2025-05-25 04:12:00.004706 | orchestrator | skipping: [testbed-node-4] => (item=br_netfilter)
2025-05-25 04:12:00.004713 | orchestrator | skipping: [testbed-node-4]
2025-05-25 04:12:00.004721 | orchestrator | skipping: [testbed-node-5] => (item=br_netfilter)
2025-05-25 04:12:00.004729 | orchestrator | skipping: [testbed-node-5]
2025-05-25 04:12:00.004737 | orchestrator |
2025-05-25 04:12:00.004745 | orchestrator | TASK [nova-cell : Enable bridge-nf-call sysctl variables] **********************
2025-05-25 04:12:00.004753 | orchestrator | Sunday 25 May 2025 04:07:16 +0000 (0:00:00.908) 0:04:01.985 ************
2025-05-25 04:12:00.004761 | orchestrator | skipping: [testbed-node-0] => (item=net.bridge.bridge-nf-call-iptables)
2025-05-25 04:12:00.004768 | orchestrator | skipping: [testbed-node-0] => (item=net.bridge.bridge-nf-call-ip6tables)
2025-05-25 04:12:00.004776 | orchestrator | changed: [testbed-node-3] => (item=net.bridge.bridge-nf-call-iptables)
2025-05-25 04:12:00.004784 | orchestrator | changed: [testbed-node-4] => (item=net.bridge.bridge-nf-call-iptables)
2025-05-25 04:12:00.004792 | orchestrator | skipping: [testbed-node-0]
2025-05-25 04:12:00.004800 | orchestrator | changed: [testbed-node-5] => (item=net.bridge.bridge-nf-call-iptables)
2025-05-25 04:12:00.004808 | orchestrator | skipping: [testbed-node-1] => (item=net.bridge.bridge-nf-call-iptables)
2025-05-25 04:12:00.004816 | orchestrator | skipping: [testbed-node-1] => (item=net.bridge.bridge-nf-call-ip6tables)
2025-05-25 04:12:00.004824 | orchestrator | skipping: [testbed-node-1]
2025-05-25 04:12:00.004832 | orchestrator | skipping: [testbed-node-2] => (item=net.bridge.bridge-nf-call-iptables)
2025-05-25 04:12:00.004840 | orchestrator | skipping: [testbed-node-2] => (item=net.bridge.bridge-nf-call-ip6tables)
2025-05-25 04:12:00.004848 | orchestrator | skipping: [testbed-node-2]
2025-05-25 04:12:00.004856 | orchestrator | changed: [testbed-node-4] => (item=net.bridge.bridge-nf-call-ip6tables)
2025-05-25 04:12:00.004864 | orchestrator | changed: [testbed-node-3] => (item=net.bridge.bridge-nf-call-ip6tables)
2025-05-25 04:12:00.004871 | orchestrator | changed: [testbed-node-5] => (item=net.bridge.bridge-nf-call-ip6tables)
2025-05-25 04:12:00.004879 | orchestrator |
2025-05-25 04:12:00.004887 | orchestrator | TASK [nova-cell : Install udev kolla kvm rules] ********************************
2025-05-25 04:12:00.004895 | orchestrator | Sunday 25 May 2025 04:07:18 +0000 (0:00:01.916) 0:04:03.396 ************
2025-05-25 04:12:00.004903 | orchestrator | skipping: [testbed-node-0]
2025-05-25 04:12:00.004911 | orchestrator | skipping: [testbed-node-1]
2025-05-25 04:12:00.004918 | orchestrator | skipping: [testbed-node-2]
2025-05-25 04:12:00.004926 | orchestrator | changed: [testbed-node-3]
2025-05-25 04:12:00.004934 | orchestrator | changed: [testbed-node-4]
2025-05-25 04:12:00.004942 | orchestrator | changed: [testbed-node-5]
2025-05-25 04:12:00.004949 | orchestrator |
2025-05-25 04:12:00.004957 | orchestrator | TASK [nova-cell : Mask qemu-kvm service] ***************************************
2025-05-25 04:12:00.004965 | orchestrator | Sunday 25 May 2025 04:07:19 +0000 (0:00:02.178) 0:04:05.313 ************
2025-05-25 04:12:00.004973 | orchestrator | skipping: [testbed-node-0]
2025-05-25 04:12:00.004981 | orchestrator | skipping: [testbed-node-1]
2025-05-25 04:12:00.004989 | orchestrator | skipping: [testbed-node-2]
2025-05-25 04:12:00.004997 | orchestrator | changed: [testbed-node-3]
2025-05-25 04:12:00.005005 | orchestrator |
changed: [testbed-node-4] 2025-05-25 04:12:00.005017 | orchestrator | changed: [testbed-node-5] 2025-05-25 04:12:00.005025 | orchestrator | 2025-05-25 04:12:00.005032 | orchestrator | TASK [nova-cell : Ensuring config directories exist] *************************** 2025-05-25 04:12:00.005040 | orchestrator | Sunday 25 May 2025 04:07:22 +0000 (0:00:02.178) 0:04:07.491 ************ 2025-05-25 04:12:00.005050 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2025-05-25 04:12:00.005086 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': 
{'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2025-05-25 04:12:00.005097 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2025-05-25 04:12:00.005106 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-05-25 04:12:00.005115 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2025-05-25 04:12:00.005124 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2025-05-25 04:12:00.005159 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-05-25 04:12:00.005174 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 
'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-05-25 04:12:00.005183 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-05-25 04:12:00.005191 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2025-05-25 04:12:00.005200 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 
'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-05-25 04:12:00.005214 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-05-25 04:12:00.005222 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-05-25 04:12:00.005253 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 
'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})
2025-05-25 04:12:00.005267 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2025-05-25 04:12:00.005276 | orchestrator |
2025-05-25 04:12:00.005284 | orchestrator | TASK [nova-cell : include_tasks] ***********************************************
2025-05-25 04:12:00.005292 | orchestrator | Sunday 25 May 2025 04:07:25 +0000 (0:00:03.290) 0:04:10.781 ************
2025-05-25 04:12:00.005300 | orchestrator | included: /ansible/roles/nova-cell/tasks/copy-certs.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2025-05-25 04:12:00.005308 | orchestrator |
2025-05-25 04:12:00.005316 | orchestrator | TASK [service-cert-copy : nova | Copying over extra CA certificates] ***********
2025-05-25 04:12:00.005324 | orchestrator | Sunday 25 May 2025 04:07:26 +0000 (0:00:01.412) 0:04:12.194 ************
2025-05-25 04:12:00.005332 | orchestrator | changed:
[testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2025-05-25 04:12:00.005347 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2025-05-25 04:12:00.005377 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 
'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2025-05-25 04:12:00.005391 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2025-05-25 04:12:00.005399 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-05-25 04:12:00.005408 | orchestrator | 
changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}})
2025-05-25 04:12:00.005416 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})
2025-05-25 04:12:00.005429 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}})
2025-05-25 04:12:00.005438 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})
2025-05-25 04:12:00.005468 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})
2025-05-25 04:12:00.005484 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2025-05-25 04:12:00.005492 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2025-05-25 04:12:00.005501 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})
2025-05-25 04:12:00.005561 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2025-05-25 04:12:00.005578 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})
2025-05-25 04:12:00.005592 | orchestrator |
2025-05-25 04:12:00.005606 | orchestrator | TASK [service-cert-copy : nova | Copying over backend internal TLS certificate] ***
2025-05-25 04:12:00.005615 | orchestrator | Sunday 25 May 2025 04:07:31 +0000 (0:00:04.557) 0:04:16.751 ************
2025-05-25 04:12:00.005651 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})
2025-05-25 04:12:00.005665 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})
2025-05-25 04:12:00.005674 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})
2025-05-25 04:12:00.005691 | orchestrator | skipping: [testbed-node-3]
2025-05-25 04:12:00.005699 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})
2025-05-25 04:12:00.005708 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})
2025-05-25 04:12:00.005737 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})
2025-05-25 04:12:00.005746 | orchestrator | skipping: [testbed-node-4]
2025-05-25 04:12:00.005759 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})
2025-05-25 04:12:00.005767 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})
2025-05-25 04:12:00.005781 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})
2025-05-25 04:12:00.005789 | orchestrator | skipping: [testbed-node-5]
2025-05-25 04:12:00.005798 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}})
2025-05-25 04:12:00.005806 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2025-05-25 04:12:00.005814 | orchestrator | skipping: [testbed-node-0]
2025-05-25 04:12:00.005844 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}})
2025-05-25 04:12:00.005857 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2025-05-25 04:12:00.005866 | orchestrator | skipping: [testbed-node-1]
2025-05-25 04:12:00.005874 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}})
2025-05-25 04:12:00.005888 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2025-05-25 04:12:00.005897 | orchestrator | skipping: [testbed-node-2]
2025-05-25 04:12:00.005905 | orchestrator |
2025-05-25 04:12:00.005913 | orchestrator | TASK [service-cert-copy : nova | Copying over backend internal TLS key] ********
2025-05-25 04:12:00.005921 | orchestrator | Sunday 25 May 2025 04:07:33 +0000 (0:00:01.993) 0:04:18.745 ************
2025-05-25 04:12:00.005929 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})
2025-05-25 04:12:00.005938 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})
2025-05-25 04:12:00.005967 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})
2025-05-25 04:12:00.005981 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})
2025-05-25 04:12:00.005995 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})
2025-05-25 04:12:00.006003 | orchestrator | skipping: [testbed-node-5]
2025-05-25 04:12:00.006012 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})
2025-05-25 04:12:00.006066 | orchestrator | skipping: [testbed-node-3]
2025-05-25 04:12:00.006081 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})
2025-05-25 04:12:00.006123 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})
2025-05-25 04:12:00.006142 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})
2025-05-25 04:12:00.006162 | orchestrator | skipping: [testbed-node-4]
2025-05-25 04:12:00.006170 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}})
2025-05-25 04:12:00.006177 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2025-05-25 04:12:00.006184 | orchestrator | skipping: [testbed-node-0]
2025-05-25 04:12:00.006191 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}})
2025-05-25 04:12:00.006198 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2025-05-25 04:12:00.006205 | orchestrator | skipping: [testbed-node-2]
2025-05-25 04:12:00.006212 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}})
2025-05-25 04:12:00.006249 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2025-05-25 04:12:00.006265 | orchestrator | skipping: [testbed-node-1]
2025-05-25 04:12:00.006272 | orchestrator |
2025-05-25 04:12:00.006279 | orchestrator | TASK [nova-cell : include_tasks] ***********************************************
2025-05-25 04:12:00.006286 | orchestrator | Sunday 25 May 2025 04:07:37 +0000 (0:00:04.158) 0:04:22.903 ************
2025-05-25 04:12:00.006297 | orchestrator | skipping: [testbed-node-0]
2025-05-25 04:12:00.006304 | orchestrator | skipping: [testbed-node-1]
2025-05-25 04:12:00.006310 | orchestrator | skipping: [testbed-node-2]
2025-05-25 04:12:00.006317 | orchestrator | included: /ansible/roles/nova-cell/tasks/external_ceph.yml for testbed-node-3, testbed-node-4, testbed-node-5
2025-05-25 04:12:00.006324 | orchestrator |
2025-05-25 04:12:00.006331 | orchestrator | TASK [nova-cell : Check nova keyring file] *************************************
2025-05-25 04:12:00.006338 | orchestrator | Sunday 25 May 2025 04:07:38 +0000 (0:00:00.976) 0:04:23.880 ************
2025-05-25 04:12:00.006344 | orchestrator | ok: [testbed-node-4 -> localhost]
2025-05-25 04:12:00.006351 | orchestrator | ok: [testbed-node-3 -> localhost]
2025-05-25 04:12:00.006358 | orchestrator | ok: [testbed-node-5 -> localhost]
2025-05-25 04:12:00.006364 | orchestrator |
2025-05-25 04:12:00.006371 | orchestrator | TASK [nova-cell : Check cinder keyring file] ***********************************
2025-05-25 04:12:00.006378 | orchestrator | Sunday 25 May 2025 04:07:40 +0000 (0:00:01.867) 0:04:25.747 ************
2025-05-25 04:12:00.006384 | orchestrator | ok: [testbed-node-4 -> localhost]
2025-05-25 04:12:00.006391 | orchestrator | ok: [testbed-node-5 -> localhost]
2025-05-25 04:12:00.006398 | orchestrator | ok: [testbed-node-3 -> localhost]
2025-05-25 04:12:00.006404 | orchestrator |
2025-05-25 04:12:00.006411 | orchestrator | TASK [nova-cell : Extract nova key from file] **********************************
2025-05-25 04:12:00.006418 | orchestrator | Sunday 25 May 2025 04:07:41 +0000 (0:00:01.290) 0:04:27.038 ************
2025-05-25 04:12:00.006424 | orchestrator | ok: [testbed-node-3]
2025-05-25 04:12:00.006432 | orchestrator | ok: [testbed-node-4]
2025-05-25 04:12:00.006438 | orchestrator | ok: [testbed-node-5]
2025-05-25 04:12:00.006445 | orchestrator |
2025-05-25 04:12:00.006452 | orchestrator | TASK [nova-cell : Extract cinder key from file] ********************************
2025-05-25 04:12:00.006458 | orchestrator | Sunday 25 May 2025 04:07:42 +0000 (0:00:01.071) 0:04:28.110 ************
2025-05-25 04:12:00.006465 | orchestrator | ok: [testbed-node-3]
2025-05-25 04:12:00.006472 | orchestrator | ok: [testbed-node-4]
2025-05-25 04:12:00.006478 | orchestrator | ok: [testbed-node-5]
2025-05-25 04:12:00.006485 | orchestrator |
2025-05-25 04:12:00.006492 | orchestrator | TASK [nova-cell : Copy over ceph nova keyring file] ****************************
2025-05-25 04:12:00.006498 | orchestrator | Sunday 25 May 2025 04:07:43 +0000 (0:00:00.698) 0:04:28.808 ************
2025-05-25 04:12:00.006505 | orchestrator | changed: [testbed-node-3] => (item=nova-compute)
2025-05-25 04:12:00.006529 | orchestrator | changed: [testbed-node-4] => (item=nova-compute)
2025-05-25 04:12:00.006536 | orchestrator | changed: [testbed-node-5] => (item=nova-compute)
2025-05-25 04:12:00.006543 | orchestrator |
2025-05-25 04:12:00.006550 | orchestrator | TASK [nova-cell : Copy over ceph cinder keyring file] **************************
2025-05-25 04:12:00.006557 | orchestrator | Sunday 25 May 2025 04:07:45 +0000 (0:00:01.666) 0:04:30.475 ************
2025-05-25 04:12:00.006563 | orchestrator | changed: [testbed-node-3] => (item=nova-compute)
2025-05-25 04:12:00.006570 | orchestrator | changed: [testbed-node-4] => (item=nova-compute)
2025-05-25 04:12:00.006577 | orchestrator | changed: [testbed-node-5] => (item=nova-compute)
2025-05-25 04:12:00.006583 | orchestrator |
2025-05-25 04:12:00.006590 | orchestrator | TASK [nova-cell : Copy over ceph.conf] *****************************************
2025-05-25 04:12:00.006596 | orchestrator | Sunday 25 May 2025 04:07:46 +0000 (0:00:01.218) 0:04:31.694 ************
2025-05-25 04:12:00.006603 | orchestrator | changed: [testbed-node-3] => (item=nova-compute)
2025-05-25 04:12:00.006610 | orchestrator | changed: [testbed-node-5] => (item=nova-compute)
2025-05-25 04:12:00.006616 | orchestrator | changed: [testbed-node-4] => (item=nova-compute)
2025-05-25 04:12:00.006623 | orchestrator | changed: [testbed-node-3] => (item=nova-libvirt)
2025-05-25 04:12:00.006635 | orchestrator | changed: [testbed-node-4] => (item=nova-libvirt)
2025-05-25 04:12:00.006641 | orchestrator | changed: [testbed-node-5] => (item=nova-libvirt)
2025-05-25 04:12:00.006648 | orchestrator |
2025-05-25 04:12:00.006655 | orchestrator | TASK [nova-cell : Ensure /etc/ceph directory exists (host libvirt)] ************
2025-05-25 04:12:00.006661 | orchestrator | Sunday 25 May 2025 04:07:52 +0000 (0:00:06.223) 0:04:37.918 ************
2025-05-25 04:12:00.006668 | orchestrator | skipping: [testbed-node-3]
2025-05-25 04:12:00.006675 | orchestrator | skipping: [testbed-node-4]
2025-05-25 04:12:00.006681 | orchestrator | skipping: [testbed-node-5]
2025-05-25 04:12:00.006688 | orchestrator |
2025-05-25 04:12:00.006694 | orchestrator | TASK [nova-cell : Copy over ceph.conf (host libvirt)] **************************
2025-05-25 04:12:00.006701 | orchestrator | Sunday 25 May 2025 04:07:53 +0000 (0:00:00.705) 0:04:38.623 ************
2025-05-25 04:12:00.006707 | orchestrator | skipping: [testbed-node-3]
2025-05-25 04:12:00.006714 | orchestrator | skipping: [testbed-node-4]
2025-05-25 04:12:00.006720 | orchestrator | skipping: [testbed-node-5]
2025-05-25 04:12:00.006727 | orchestrator |
2025-05-25 04:12:00.006734 | orchestrator | TASK [nova-cell : Ensuring libvirt secrets directory exists] *******************
2025-05-25 04:12:00.006741 | orchestrator | Sunday 25 May 2025 04:07:53 +0000 (0:00:00.337) 0:04:38.961 ************
2025-05-25 04:12:00.006747 | orchestrator | changed: [testbed-node-5]
2025-05-25 04:12:00.006754 | orchestrator | changed: [testbed-node-3]
2025-05-25 04:12:00.006761 | orchestrator | changed: [testbed-node-4]
2025-05-25 04:12:00.006767 | orchestrator |
2025-05-25 04:12:00.006796 | orchestrator | TASK [nova-cell : Pushing nova secret xml for libvirt] *************************
2025-05-25 04:12:00.006805 | orchestrator | Sunday 25 May 2025 04:07:55 +0000 (0:00:01.707) 0:04:40.668 ************
2025-05-25 04:12:00.006812 | orchestrator | changed: [testbed-node-4] => (item={'uuid': '5a2bf0bf-e1ab-4a6a-bc32-404bb6ba91fd', 'name': 'client.nova secret', 'enabled': True})
2025-05-25 04:12:00.006819 | orchestrator | changed: [testbed-node-3] => (item={'uuid': '5a2bf0bf-e1ab-4a6a-bc32-404bb6ba91fd', 'name': 'client.nova secret', 'enabled': True})
2025-05-25 04:12:00.006826 | orchestrator | changed: [testbed-node-5] => (item={'uuid': '5a2bf0bf-e1ab-4a6a-bc32-404bb6ba91fd', 'name': 'client.nova secret', 'enabled': True})
2025-05-25 04:12:00.006839 | orchestrator | changed: [testbed-node-4] => (item={'uuid': '63dd366f-e403-41f2-beff-dad9980a1637', 'name': 'client.cinder secret', 'enabled': 'yes'})
2025-05-25 04:12:00.006846 | orchestrator | changed: [testbed-node-3] => (item={'uuid': '63dd366f-e403-41f2-beff-dad9980a1637', 'name': 'client.cinder secret', 'enabled': 'yes'})
2025-05-25 04:12:00.006853 | orchestrator | changed: [testbed-node-5] => (item={'uuid': '63dd366f-e403-41f2-beff-dad9980a1637', 'name': 'client.cinder secret', 'enabled': 'yes'})
2025-05-25 04:12:00.006860 | orchestrator |
2025-05-25 04:12:00.006867 | orchestrator | TASK [nova-cell : Pushing secrets key for libvirt] *****************************
2025-05-25 04:12:00.006873 | orchestrator | Sunday 25 May 2025 04:07:59 +0000 (0:00:04.302) 0:04:44.970 ************
2025-05-25 04:12:00.006880 | orchestrator | changed: [testbed-node-4] => (item=None)
2025-05-25 04:12:00.006887 | orchestrator | changed: [testbed-node-3] => (item=None)
2025-05-25 04:12:00.006893 | orchestrator | changed: [testbed-node-5] => (item=None)
2025-05-25 04:12:00.006900 | orchestrator | changed: [testbed-node-4] => (item=None)
2025-05-25 04:12:00.006906 | orchestrator | changed: [testbed-node-4]
2025-05-25 04:12:00.006913 | orchestrator | changed: [testbed-node-3] => (item=None)
2025-05-25 04:12:00.006920 | orchestrator | changed: [testbed-node-3]
2025-05-25 04:12:00.006926 | orchestrator | changed: [testbed-node-5] => (item=None)
2025-05-25 04:12:00.006933 | orchestrator | changed: [testbed-node-5]
2025-05-25 04:12:00.006939 | orchestrator |
2025-05-25 04:12:00.006946 | orchestrator | TASK [nova-cell : Check if policies shall be overwritten] **********************
2025-05-25 04:12:00.006952 | orchestrator | Sunday 25 May 2025 04:08:04 +0000 (0:00:04.843) 0:04:49.814 ************
2025-05-25 04:12:00.006959 | orchestrator | skipping: [testbed-node-3]
2025-05-25 04:12:00.006971 | orchestrator |
2025-05-25 04:12:00.006978 | orchestrator | TASK [nova-cell : Set nova policy file] ****************************************
2025-05-25 04:12:00.006984 | orchestrator | Sunday 25 May 2025 04:08:04 +0000 (0:00:00.117) 0:04:49.931 ************
2025-05-25 04:12:00.006991 | orchestrator | skipping: [testbed-node-3]
2025-05-25 04:12:00.006998 | orchestrator | skipping: [testbed-node-4]
2025-05-25 04:12:00.007004 | orchestrator | skipping: [testbed-node-5]
2025-05-25 04:12:00.007011 | orchestrator | skipping: [testbed-node-0]
2025-05-25 04:12:00.007017 | orchestrator | skipping: [testbed-node-1]
2025-05-25 04:12:00.007024 | orchestrator | skipping: [testbed-node-2]
2025-05-25 04:12:00.007030 | orchestrator |
2025-05-25 04:12:00.007037 | orchestrator | TASK [nova-cell : Check for vendordata file] ***********************************
2025-05-25 04:12:00.007044 | orchestrator | Sunday 25 May 2025 04:08:05 +0000 (0:00:00.616) 0:04:50.548 ************
2025-05-25 04:12:00.007050 | orchestrator | ok: [testbed-node-3 -> localhost]
2025-05-25 04:12:00.007057 | orchestrator |
2025-05-25 04:12:00.007064 | orchestrator | TASK [nova-cell : Set vendordata file path] ************************************
2025-05-25 04:12:00.007071 | orchestrator | Sunday 25 May 2025 04:08:05 +0000 (0:00:00.647) 0:04:51.195 ************
2025-05-25 04:12:00.007077 | orchestrator | skipping: [testbed-node-3]
2025-05-25 04:12:00.007084 | orchestrator | skipping: [testbed-node-4]
2025-05-25 04:12:00.007091 | orchestrator | skipping: [testbed-node-5]
2025-05-25 04:12:00.007097 | orchestrator | skipping: [testbed-node-0]
2025-05-25 04:12:00.007106 | orchestrator | skipping: [testbed-node-1]
2025-05-25 04:12:00.007117 | orchestrator | skipping: [testbed-node-2]
2025-05-25 04:12:00.007129 | orchestrator |
2025-05-25 04:12:00.007136 | orchestrator | TASK [nova-cell : Copying over config.json files for services] *****************
2025-05-25 04:12:00.007142 | orchestrator | Sunday 25 May 2025 04:08:06 +0000 (0:00:00.551) 0:04:51.747 ************
2025-05-25 04:12:00.007149 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})
2025-05-25 04:12:00.007163 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image':
'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2025-05-25 04:12:00.007174 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-05-25 04:12:00.007186 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 
'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2025-05-25 04:12:00.007193 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-05-25 04:12:00.007200 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-05-25 04:12:00.007207 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2025-05-25 04:12:00.007220 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2025-05-25 04:12:00.007231 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-05-25 04:12:00.007242 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2025-05-25 04:12:00.007249 | orchestrator | 
changed: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-05-25 04:12:00.007256 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-05-25 04:12:00.007263 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': 
'30'}}}) 2025-05-25 04:12:00.007274 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-05-25 04:12:00.007285 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-05-25 04:12:00.007296 | orchestrator | 2025-05-25 04:12:00.007303 | orchestrator | TASK [nova-cell : Copying over nova.conf] ************************************** 2025-05-25 04:12:00.007310 | orchestrator | Sunday 25 May 2025 04:08:10 +0000 (0:00:04.155) 0:04:55.903 ************ 2025-05-25 04:12:00.007317 | 
orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-05-25 04:12:00.007324 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-05-25 04:12:00.007331 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 
'', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-05-25 04:12:00.007339 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-05-25 04:12:00.007353 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-05-25 04:12:00.007365 | orchestrator | skipping: [testbed-node-4] => (item={'key': 
'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-05-25 04:12:00.007373 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-05-25 04:12:00.007380 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', 
'', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-05-25 04:12:00.007387 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-05-25 04:12:00.007398 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-05-25 04:12:00.007412 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-05-25 04:12:00.007419 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-05-25 04:12:00.007426 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-05-25 04:12:00.007433 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': 
{}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-05-25 04:12:00.007440 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-05-25 04:12:00.007447 | orchestrator | 2025-05-25 04:12:00.007454 | orchestrator | TASK [nova-cell : Copying over Nova compute provider config] ******************* 2025-05-25 04:12:00.007461 | orchestrator | Sunday 25 May 2025 04:08:17 +0000 (0:00:06.583) 0:05:02.487 ************ 2025-05-25 04:12:00.007468 | orchestrator | skipping: [testbed-node-3] 2025-05-25 04:12:00.007475 | orchestrator | skipping: [testbed-node-4] 2025-05-25 04:12:00.007481 | orchestrator | skipping: [testbed-node-5] 2025-05-25 04:12:00.007488 | orchestrator | skipping: [testbed-node-0] 2025-05-25 04:12:00.007495 | orchestrator | skipping: [testbed-node-1] 2025-05-25 04:12:00.007501 | orchestrator | skipping: [testbed-node-2] 2025-05-25 04:12:00.007508 | orchestrator | 2025-05-25 04:12:00.007556 | orchestrator | TASK [nova-cell : Copying over libvirt configuration] ************************** 2025-05-25 04:12:00.007564 | orchestrator | Sunday 25 May 2025 04:08:18 +0000 (0:00:01.409) 0:05:03.896 ************ 2025-05-25 04:12:00.007571 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'})  2025-05-25 04:12:00.007584 | orchestrator | changed: [testbed-node-3] => (item={'src': 'qemu.conf.j2', 
'dest': 'qemu.conf'}) 2025-05-25 04:12:00.007591 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'})  2025-05-25 04:12:00.007598 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'})  2025-05-25 04:12:00.007610 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'})  2025-05-25 04:12:00.007618 | orchestrator | skipping: [testbed-node-0] 2025-05-25 04:12:00.007625 | orchestrator | changed: [testbed-node-4] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'}) 2025-05-25 04:12:00.007632 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'})  2025-05-25 04:12:00.007639 | orchestrator | skipping: [testbed-node-1] 2025-05-25 04:12:00.007646 | orchestrator | changed: [testbed-node-5] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'}) 2025-05-25 04:12:00.007653 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'})  2025-05-25 04:12:00.007660 | orchestrator | skipping: [testbed-node-2] 2025-05-25 04:12:00.007667 | orchestrator | changed: [testbed-node-3] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'}) 2025-05-25 04:12:00.007678 | orchestrator | changed: [testbed-node-5] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'}) 2025-05-25 04:12:00.007685 | orchestrator | changed: [testbed-node-4] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'}) 2025-05-25 04:12:00.007692 | orchestrator | 2025-05-25 04:12:00.007699 | orchestrator | TASK [nova-cell : Copying over libvirt TLS keys] ******************************* 2025-05-25 04:12:00.007706 | orchestrator | Sunday 25 May 2025 04:08:22 +0000 (0:00:03.637) 0:05:07.533 ************ 2025-05-25 04:12:00.007713 | orchestrator | skipping: [testbed-node-3] 2025-05-25 04:12:00.007720 | orchestrator | skipping: [testbed-node-4] 2025-05-25 04:12:00.007727 | 
orchestrator | skipping: [testbed-node-5] 2025-05-25 04:12:00.007734 | orchestrator | skipping: [testbed-node-0] 2025-05-25 04:12:00.007741 | orchestrator | skipping: [testbed-node-1] 2025-05-25 04:12:00.007748 | orchestrator | skipping: [testbed-node-2] 2025-05-25 04:12:00.007755 | orchestrator | 2025-05-25 04:12:00.007762 | orchestrator | TASK [nova-cell : Copying over libvirt SASL configuration] ********************* 2025-05-25 04:12:00.007770 | orchestrator | Sunday 25 May 2025 04:08:22 +0000 (0:00:00.626) 0:05:08.159 ************ 2025-05-25 04:12:00.007777 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'})  2025-05-25 04:12:00.007785 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'})  2025-05-25 04:12:00.007792 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'})  2025-05-25 04:12:00.007799 | orchestrator | changed: [testbed-node-3] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'}) 2025-05-25 04:12:00.007806 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'})  2025-05-25 04:12:00.007813 | orchestrator | changed: [testbed-node-4] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'}) 2025-05-25 04:12:00.007820 | orchestrator | changed: [testbed-node-5] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'}) 2025-05-25 04:12:00.007828 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'})  2025-05-25 04:12:00.007834 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'})  2025-05-25 04:12:00.007841 | orchestrator | skipping: [testbed-node-0] 
2025-05-25 04:12:00.007848 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'})  2025-05-25 04:12:00.007854 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'})  2025-05-25 04:12:00.007868 | orchestrator | skipping: [testbed-node-1] 2025-05-25 04:12:00.007875 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'})  2025-05-25 04:12:00.007881 | orchestrator | skipping: [testbed-node-2] 2025-05-25 04:12:00.007888 | orchestrator | changed: [testbed-node-4] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'}) 2025-05-25 04:12:00.007895 | orchestrator | changed: [testbed-node-3] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'}) 2025-05-25 04:12:00.007901 | orchestrator | changed: [testbed-node-5] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'}) 2025-05-25 04:12:00.007908 | orchestrator | changed: [testbed-node-4] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'}) 2025-05-25 04:12:00.007914 | orchestrator | changed: [testbed-node-3] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'}) 2025-05-25 04:12:00.007921 | orchestrator | changed: [testbed-node-5] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'}) 2025-05-25 04:12:00.007928 | orchestrator | 2025-05-25 04:12:00.007934 | orchestrator | TASK [nova-cell : Copying files for nova-ssh] ********************************** 2025-05-25 04:12:00.007941 | orchestrator | Sunday 25 May 2025 04:08:27 +0000 (0:00:04.680) 0:05:12.840 ************ 2025-05-25 04:12:00.007950 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'})  2025-05-25 04:12:00.007961 | orchestrator | skipping: 
[testbed-node-0] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'})  2025-05-25 04:12:00.007974 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'})  2025-05-25 04:12:00.007983 | orchestrator | changed: [testbed-node-3] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'}) 2025-05-25 04:12:00.007992 | orchestrator | changed: [testbed-node-4] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'}) 2025-05-25 04:12:00.008001 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'id_rsa', 'dest': 'id_rsa'})  2025-05-25 04:12:00.008010 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'id_rsa', 'dest': 'id_rsa'})  2025-05-25 04:12:00.008021 | orchestrator | changed: [testbed-node-5] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'}) 2025-05-25 04:12:00.008032 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'id_rsa', 'dest': 'id_rsa'})  2025-05-25 04:12:00.008049 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'})  2025-05-25 04:12:00.008060 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'})  2025-05-25 04:12:00.008069 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'})  2025-05-25 04:12:00.008075 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'})  2025-05-25 04:12:00.008081 | orchestrator | skipping: [testbed-node-2] 2025-05-25 04:12:00.008087 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'})  2025-05-25 04:12:00.008093 | orchestrator | skipping: [testbed-node-1] 2025-05-25 04:12:00.008099 | orchestrator | changed: [testbed-node-3] => (item={'src': 'id_rsa', 'dest': 'id_rsa'}) 2025-05-25 04:12:00.008105 | orchestrator | changed: [testbed-node-4] => (item={'src': 'id_rsa', 'dest': 'id_rsa'}) 2025-05-25 04:12:00.008112 | 
orchestrator | changed: [testbed-node-5] => (item={'src': 'id_rsa', 'dest': 'id_rsa'}) 2025-05-25 04:12:00.008118 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'})  2025-05-25 04:12:00.008124 | orchestrator | skipping: [testbed-node-0] 2025-05-25 04:12:00.008131 | orchestrator | changed: [testbed-node-3] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'}) 2025-05-25 04:12:00.008137 | orchestrator | changed: [testbed-node-4] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'}) 2025-05-25 04:12:00.008148 | orchestrator | changed: [testbed-node-5] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'}) 2025-05-25 04:12:00.008154 | orchestrator | changed: [testbed-node-3] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'}) 2025-05-25 04:12:00.008160 | orchestrator | changed: [testbed-node-4] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'}) 2025-05-25 04:12:00.008167 | orchestrator | changed: [testbed-node-5] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'}) 2025-05-25 04:12:00.008173 | orchestrator | 2025-05-25 04:12:00.008179 | orchestrator | TASK [nova-cell : Copying VMware vCenter CA file] ****************************** 2025-05-25 04:12:00.008185 | orchestrator | Sunday 25 May 2025 04:08:34 +0000 (0:00:07.202) 0:05:20.042 ************ 2025-05-25 04:12:00.008191 | orchestrator | skipping: [testbed-node-3] 2025-05-25 04:12:00.008197 | orchestrator | skipping: [testbed-node-4] 2025-05-25 04:12:00.008203 | orchestrator | skipping: [testbed-node-5] 2025-05-25 04:12:00.008210 | orchestrator | skipping: [testbed-node-0] 2025-05-25 04:12:00.008216 | orchestrator | skipping: [testbed-node-1] 2025-05-25 04:12:00.008222 | orchestrator | skipping: [testbed-node-2] 2025-05-25 04:12:00.008228 | orchestrator | 2025-05-25 04:12:00.008238 | orchestrator | TASK [nova-cell : Copying 'release' file for nova_compute] ********************* 2025-05-25 04:12:00.008248 | orchestrator | Sunday 25 May 2025 04:08:35 +0000 
(0:00:00.470) 0:05:20.513 ************ 2025-05-25 04:12:00.008258 | orchestrator | skipping: [testbed-node-3] 2025-05-25 04:12:00.008268 | orchestrator | skipping: [testbed-node-4] 2025-05-25 04:12:00.008278 | orchestrator | skipping: [testbed-node-5] 2025-05-25 04:12:00.008288 | orchestrator | skipping: [testbed-node-0] 2025-05-25 04:12:00.008297 | orchestrator | skipping: [testbed-node-1] 2025-05-25 04:12:00.008306 | orchestrator | skipping: [testbed-node-2] 2025-05-25 04:12:00.008316 | orchestrator | 2025-05-25 04:12:00.008325 | orchestrator | TASK [nova-cell : Generating 'hostnqn' file for nova_compute] ****************** 2025-05-25 04:12:00.008334 | orchestrator | Sunday 25 May 2025 04:08:35 +0000 (0:00:00.724) 0:05:21.238 ************ 2025-05-25 04:12:00.008345 | orchestrator | skipping: [testbed-node-1] 2025-05-25 04:12:00.008354 | orchestrator | skipping: [testbed-node-0] 2025-05-25 04:12:00.008364 | orchestrator | changed: [testbed-node-3] 2025-05-25 04:12:00.008374 | orchestrator | skipping: [testbed-node-2] 2025-05-25 04:12:00.008384 | orchestrator | changed: [testbed-node-4] 2025-05-25 04:12:00.008394 | orchestrator | changed: [testbed-node-5] 2025-05-25 04:12:00.008403 | orchestrator | 2025-05-25 04:12:00.008413 | orchestrator | TASK [nova-cell : Copying over existing policy file] *************************** 2025-05-25 04:12:00.008422 | orchestrator | Sunday 25 May 2025 04:08:37 +0000 (0:00:01.867) 0:05:23.105 ************ 2025-05-25 04:12:00.008441 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', 
'/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-05-25 04:12:00.008460 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-05-25 04:12:00.008491 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-05-25 04:12:00.008498 | orchestrator | skipping: [testbed-node-3] 2025-05-25 04:12:00.008505 | orchestrator | skipping: [testbed-node-4] => (item={'key': 
'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-05-25 04:12:00.008528 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-05-25 04:12:00.008536 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 
'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-05-25 04:12:00.008542 | orchestrator | skipping: [testbed-node-4] 2025-05-25 04:12:00.008553 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-05-25 04:12:00.008568 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-05-25 04:12:00.008575 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 
'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-05-25 04:12:00.008581 | orchestrator | skipping: [testbed-node-5] 2025-05-25 04:12:00.008588 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}})  2025-05-25 04:12:00.008594 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': 
'30'}}})  2025-05-25 04:12:00.008601 | orchestrator | skipping: [testbed-node-0] 2025-05-25 04:12:00.008607 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}})  2025-05-25 04:12:00.008618 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-05-25 04:12:00.008629 | orchestrator | skipping: [testbed-node-1] 2025-05-25 04:12:00.008639 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}})  2025-05-25 04:12:00.008646 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-05-25 04:12:00.008653 | orchestrator | skipping: [testbed-node-2] 2025-05-25 04:12:00.008659 | orchestrator | 2025-05-25 04:12:00.008666 | orchestrator | TASK [nova-cell : Copying over vendordata file to containers] ****************** 2025-05-25 04:12:00.008672 | orchestrator | Sunday 25 May 2025 04:08:39 +0000 (0:00:01.809) 0:05:24.914 ************ 2025-05-25 04:12:00.008678 | orchestrator | skipping: [testbed-node-3] => (item=nova-compute)  2025-05-25 04:12:00.008685 | orchestrator | skipping: [testbed-node-3] => (item=nova-compute-ironic)  2025-05-25 04:12:00.008691 | orchestrator | skipping: [testbed-node-3] 2025-05-25 04:12:00.008697 | orchestrator | skipping: [testbed-node-4] => (item=nova-compute)  2025-05-25 04:12:00.008704 | orchestrator | skipping: [testbed-node-4] => (item=nova-compute-ironic)  2025-05-25 04:12:00.008710 | orchestrator | skipping: [testbed-node-4] 2025-05-25 04:12:00.008716 | orchestrator | skipping: [testbed-node-5] => (item=nova-compute)  2025-05-25 04:12:00.008722 | orchestrator | skipping: [testbed-node-5] => (item=nova-compute-ironic)  2025-05-25 04:12:00.008729 | orchestrator | skipping: [testbed-node-5] 2025-05-25 04:12:00.008735 | orchestrator | skipping: [testbed-node-0] => (item=nova-compute)  2025-05-25 04:12:00.008741 | orchestrator | skipping: 
[testbed-node-0] => (item=nova-compute-ironic)  2025-05-25 04:12:00.008747 | orchestrator | skipping: [testbed-node-0] 2025-05-25 04:12:00.008754 | orchestrator | skipping: [testbed-node-1] => (item=nova-compute)  2025-05-25 04:12:00.008760 | orchestrator | skipping: [testbed-node-1] => (item=nova-compute-ironic)  2025-05-25 04:12:00.008766 | orchestrator | skipping: [testbed-node-1] 2025-05-25 04:12:00.008772 | orchestrator | skipping: [testbed-node-2] => (item=nova-compute)  2025-05-25 04:12:00.008778 | orchestrator | skipping: [testbed-node-2] => (item=nova-compute-ironic)  2025-05-25 04:12:00.008784 | orchestrator | skipping: [testbed-node-2] 2025-05-25 04:12:00.008791 | orchestrator | 2025-05-25 04:12:00.008797 | orchestrator | TASK [nova-cell : Check nova-cell containers] ********************************** 2025-05-25 04:12:00.008803 | orchestrator | Sunday 25 May 2025 04:08:40 +0000 (0:00:00.595) 0:05:25.509 ************ 2025-05-25 04:12:00.008809 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2025-05-25 04:12:00.008824 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 
'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2025-05-25 04:12:00.008835 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2025-05-25 04:12:00.008843 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 
'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2025-05-25 04:12:00.008855 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2025-05-25 04:12:00.008865 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-05-25 04:12:00.008875 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-05-25 04:12:00.008897 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-05-25 04:12:00.008911 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-05-25 04:12:00.008925 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2025-05-25 04:12:00.008935 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-05-25 04:12:00.008945 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-05-25 04:12:00.008956 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': 
['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-05-25 04:12:00.008979 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-05-25 04:12:00.008994 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-05-25 04:12:00.009005 | orchestrator | 2025-05-25 04:12:00.009015 | orchestrator | TASK [nova-cell : include_tasks] *********************************************** 2025-05-25 04:12:00.009026 | orchestrator | 
Sunday 25 May 2025 04:08:43 +0000 (0:00:03.090) 0:05:28.600 ************ 2025-05-25 04:12:00.009034 | orchestrator | skipping: [testbed-node-3] 2025-05-25 04:12:00.009040 | orchestrator | skipping: [testbed-node-4] 2025-05-25 04:12:00.009046 | orchestrator | skipping: [testbed-node-5] 2025-05-25 04:12:00.009053 | orchestrator | skipping: [testbed-node-0] 2025-05-25 04:12:00.009059 | orchestrator | skipping: [testbed-node-1] 2025-05-25 04:12:00.009065 | orchestrator | skipping: [testbed-node-2] 2025-05-25 04:12:00.009071 | orchestrator | 2025-05-25 04:12:00.009077 | orchestrator | TASK [nova-cell : Flush handlers] ********************************************** 2025-05-25 04:12:00.009083 | orchestrator | Sunday 25 May 2025 04:08:43 +0000 (0:00:00.540) 0:05:29.140 ************ 2025-05-25 04:12:00.009090 | orchestrator | 2025-05-25 04:12:00.009096 | orchestrator | TASK [nova-cell : Flush handlers] ********************************************** 2025-05-25 04:12:00.009102 | orchestrator | Sunday 25 May 2025 04:08:44 +0000 (0:00:00.303) 0:05:29.444 ************ 2025-05-25 04:12:00.009108 | orchestrator | 2025-05-25 04:12:00.009116 | orchestrator | TASK [nova-cell : Flush handlers] ********************************************** 2025-05-25 04:12:00.009126 | orchestrator | Sunday 25 May 2025 04:08:44 +0000 (0:00:00.148) 0:05:29.592 ************ 2025-05-25 04:12:00.009136 | orchestrator | 2025-05-25 04:12:00.009145 | orchestrator | TASK [nova-cell : Flush handlers] ********************************************** 2025-05-25 04:12:00.009155 | orchestrator | Sunday 25 May 2025 04:08:44 +0000 (0:00:00.169) 0:05:29.761 ************ 2025-05-25 04:12:00.009164 | orchestrator | 2025-05-25 04:12:00.009175 | orchestrator | TASK [nova-cell : Flush handlers] ********************************************** 2025-05-25 04:12:00.009185 | orchestrator | Sunday 25 May 2025 04:08:44 +0000 (0:00:00.120) 0:05:29.882 ************ 2025-05-25 04:12:00.009195 | orchestrator | 2025-05-25 04:12:00.009205 
| orchestrator | TASK [nova-cell : Flush handlers] **********************************************
Sunday 25 May 2025 04:08:44 +0000 (0:00:00.116)       0:05:29.998 ************

RUNNING HANDLER [nova-cell : Restart nova-conductor container] *****************
Sunday 25 May 2025 04:08:44 +0000 (0:00:00.114)       0:05:30.113 ************
changed: [testbed-node-0]
changed: [testbed-node-2]
changed: [testbed-node-1]

RUNNING HANDLER [nova-cell : Restart nova-novncproxy container] ****************
Sunday 25 May 2025 04:08:52 +0000 (0:00:07.322)       0:05:37.436 ************
changed: [testbed-node-0]
changed: [testbed-node-2]
changed: [testbed-node-1]

RUNNING HANDLER [nova-cell : Restart nova-ssh container] ***********************
Sunday 25 May 2025 04:09:06 +0000 (0:00:14.797)       0:05:52.233 ************
changed: [testbed-node-3]
changed: [testbed-node-5]
changed: [testbed-node-4]

RUNNING HANDLER [nova-cell : Restart nova-libvirt container] *******************
Sunday 25 May 2025 04:09:34 +0000 (0:00:27.506)       0:06:19.739 ************
changed: [testbed-node-5]
changed: [testbed-node-4]
changed: [testbed-node-3]

RUNNING HANDLER [nova-cell : Checking libvirt container is ready] **************
Sunday 25 May 2025 04:10:18 +0000 (0:00:44.071)       0:07:03.811 ************
changed: [testbed-node-3]
FAILED - RETRYING: [testbed-node-4]: Checking libvirt container is ready (10 retries left).
FAILED - RETRYING: [testbed-node-5]: Checking libvirt container is ready (10 retries left).
changed: [testbed-node-4]
changed: [testbed-node-5]

RUNNING HANDLER [nova-cell : Create libvirt SASL user] *************************
Sunday 25 May 2025 04:10:24 +0000 (0:00:06.404)       0:07:10.216 ************
changed: [testbed-node-3]
changed: [testbed-node-4]
changed: [testbed-node-5]

RUNNING HANDLER [nova-cell : Restart nova-compute container] *******************
Sunday 25 May 2025 04:10:25 +0000 (0:00:00.772)       0:07:10.988 ************
changed: [testbed-node-3]
changed: [testbed-node-4]
changed: [testbed-node-5]

RUNNING HANDLER [nova-cell : Wait for nova-compute services to update service versions] ***
Sunday 25 May 2025 04:10:51 +0000 (0:00:25.907)       0:07:36.896 ************
skipping: [testbed-node-3]

TASK [nova-cell : Waiting for nova-compute services to register themselves] ****
Sunday 25 May 2025 04:10:51 +0000 (0:00:00.136)       0:07:37.033 ************
skipping: [testbed-node-4]
skipping: [testbed-node-1]
skipping: [testbed-node-0]
skipping: [testbed-node-2]
skipping: [testbed-node-5]
FAILED - RETRYING: [testbed-node-3 -> testbed-node-0]: Waiting for nova-compute services to register themselves (20 retries left).
2025-05-25 04:12:00.009638 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)]

TASK [nova-cell : Fail if nova-compute service failed to register] *************
Sunday 25 May 2025 04:11:13 +0000 (0:00:21.307)       0:07:58.340 ************
skipping: [testbed-node-4]
skipping: [testbed-node-0]
skipping: [testbed-node-2]
skipping: [testbed-node-3]
skipping: [testbed-node-5]
skipping: [testbed-node-1]

TASK [nova-cell : Include discover_computes.yml] *******************************
Sunday 25 May 2025 04:11:22 +0000 (0:00:09.809)       0:08:08.150 ************
skipping: [testbed-node-5]
skipping: [testbed-node-1]
skipping: [testbed-node-0]
skipping: [testbed-node-4]
skipping: [testbed-node-2]
included: /ansible/roles/nova-cell/tasks/discover_computes.yml for testbed-node-3

TASK [nova-cell : Get a list of existing cells] ********************************
Sunday 25 May 2025 04:11:26 +0000 (0:00:03.959)       0:08:12.110 ************
ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)]

TASK [nova-cell : Extract current cell settings from list] *********************
Sunday 25 May 2025 04:11:38 +0000 (0:00:11.805)       0:08:23.915 ************
ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)]

TASK [nova-cell : Fail if cell settings not found] *****************************
Sunday 25 May 2025 04:11:40 +0000 (0:00:01.556)       0:08:25.472 ************
skipping: [testbed-node-3]

TASK [nova-cell : Discover nova hosts] *****************************************
Sunday 25 May 2025 04:11:41 +0000 (0:00:01.500)       0:08:26.972 ************
ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)]

TASK [nova-cell : Remove old nova_libvirt_secrets container volume] ************
Sunday 25 May 2025 04:11:51 +0000 (0:00:10.182)       0:08:37.154 ************
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]
ok: [testbed-node-0]
ok: [testbed-node-1]
ok: [testbed-node-2]

PLAY [Refresh nova scheduler cell cache] ***************************************

TASK [nova : Refresh cell cache in nova scheduler] *****************************
Sunday 25 May 2025 04:11:53 +0000 (0:00:01.619)       0:08:38.774 ************
changed: [testbed-node-0]
changed: [testbed-node-1]
changed: [testbed-node-2]

PLAY [Reload global Nova super conductor services] *****************************

TASK [nova : Reload nova super conductor services to remove RPC version pin] ***
Sunday 25 May 2025 04:11:54 +0000 (0:00:01.061)       0:08:39.835 ************
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

PLAY [Reload Nova cell services] ***********************************************

TASK [nova-cell : Reload nova cell services to remove RPC version cap] *********
Sunday 25 May 2025 04:11:54 +0000 (0:00:00.485)       0:08:40.320 ************
skipping: [testbed-node-3] => (item=nova-conductor)
skipping: [testbed-node-3] => (item=nova-compute)
skipping: [testbed-node-3] => (item=nova-compute-ironic)
skipping: [testbed-node-3] => (item=nova-novncproxy)
skipping: [testbed-node-3] => (item=nova-serialproxy)
skipping: [testbed-node-3] => (item=nova-spicehtml5proxy)
skipping: [testbed-node-3]
skipping: [testbed-node-4] => (item=nova-conductor)
skipping: [testbed-node-4] => (item=nova-compute)
skipping: [testbed-node-4] => (item=nova-compute-ironic)
skipping: [testbed-node-4] => (item=nova-novncproxy)
skipping: [testbed-node-4] => (item=nova-serialproxy)
skipping: [testbed-node-4] => (item=nova-spicehtml5proxy)
skipping: [testbed-node-4]
skipping: [testbed-node-5] => (item=nova-conductor)
skipping: [testbed-node-5] => (item=nova-compute)
skipping: [testbed-node-5] => (item=nova-compute-ironic)
skipping: [testbed-node-5] => (item=nova-novncproxy)
skipping: [testbed-node-5] => (item=nova-serialproxy)
skipping: [testbed-node-5] => (item=nova-spicehtml5proxy)
skipping: [testbed-node-5]
skipping: [testbed-node-0] => (item=nova-conductor)
skipping: [testbed-node-0] => (item=nova-compute)
skipping: [testbed-node-0] => (item=nova-compute-ironic)
skipping: [testbed-node-0] => (item=nova-novncproxy)
skipping: [testbed-node-0] => (item=nova-serialproxy)
skipping: [testbed-node-0] => (item=nova-spicehtml5proxy)
skipping: [testbed-node-0]
skipping: [testbed-node-1] => (item=nova-conductor)
skipping: [testbed-node-1] => (item=nova-compute)
skipping: [testbed-node-1] => (item=nova-compute-ironic)
skipping: [testbed-node-1] => (item=nova-novncproxy)
skipping: [testbed-node-1] => (item=nova-serialproxy)
skipping: [testbed-node-1] => (item=nova-spicehtml5proxy)
skipping: [testbed-node-1]
skipping: [testbed-node-2] => (item=nova-conductor)
skipping: [testbed-node-2] => (item=nova-compute)
skipping: [testbed-node-2] => (item=nova-compute-ironic)
skipping: [testbed-node-2] => (item=nova-novncproxy)
skipping: [testbed-node-2] => (item=nova-serialproxy)
skipping: [testbed-node-2] => (item=nova-spicehtml5proxy)
skipping: [testbed-node-2]

PLAY [Reload global Nova API services] *****************************************

TASK [nova : Reload nova API services to remove RPC version pin] ***************
Sunday 25 May 2025 04:11:56 +0000 (0:00:01.340)       0:08:41.661 ************
skipping: [testbed-node-0] => (item=nova-scheduler)
skipping: [testbed-node-0] => (item=nova-api)
skipping: [testbed-node-0]
skipping: [testbed-node-1] => (item=nova-scheduler)
skipping: [testbed-node-1] => (item=nova-api)
skipping: [testbed-node-1]
skipping: [testbed-node-2] => (item=nova-scheduler)
skipping: [testbed-node-2] => (item=nova-api)
skipping: [testbed-node-2]

PLAY [Run Nova API online data migrations] *************************************

TASK [nova : Run Nova API online database migrations] **************************
Sunday 25 May 2025 04:11:57 +0000 (0:00:00.729)       0:08:42.390 ************
skipping: [testbed-node-0]

PLAY [Run Nova cell online data migrations] ************************************

TASK [nova-cell : Run Nova cell online database migrations] ********************
Sunday 25 May 2025 04:11:57 +0000 (0:00:00.639)       0:08:43.030 ************
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]
2025-05-25 04:12:00.010845 | orchestrator | PLAY RECAP *********************************************************************
testbed-manager            : ok=3    changed=3    unreachable=0    failed=0    skipped=0    rescued=0    ignored=0
testbed-node-0             : ok=54   changed=35   unreachable=0    failed=0    skipped=44   rescued=0    ignored=0
testbed-node-1             : ok=27   changed=19   unreachable=0    failed=0    skipped=51   rescued=0    ignored=0
testbed-node-2             : ok=27   changed=19   unreachable=0    failed=0    skipped=51   rescued=0    ignored=0
testbed-node-3             : ok=43   changed=27   unreachable=0    failed=0    skipped=20   rescued=0    ignored=0
testbed-node-4             : ok=37   changed=27   unreachable=0    failed=0    skipped=19   rescued=0    ignored=0
testbed-node-5             : ok=37   changed=27   unreachable=0    failed=0    skipped=19   rescued=0    ignored=0

TASKS RECAP ********************************************************************
Sunday 25 May 2025 04:11:58 +0000 (0:00:00.411)       0:08:43.441 ************
===============================================================================
nova-cell : Restart nova-libvirt container ----------------------------- 44.07s
nova-cell : Restart nova-ssh container --------------------------------- 27.51s
nova : Running Nova API bootstrap container ---------------------------- 26.89s
nova-cell : Restart nova-compute container ----------------------------- 25.91s
nova : Restart nova-scheduler container -------------------------------- 21.62s
nova-cell : Waiting for nova-compute services to register themselves --- 21.31s
nova-cell : Running Nova cell bootstrap container ---------------------- 19.01s
nova : Running Nova API bootstrap container ---------------------------- 18.07s
nova-cell : Restart nova-novncproxy container -------------------------- 14.80s
nova : Create cell0 mappings ------------------------------------------- 12.64s
nova-cell : Get a list of existing cells ------------------------------- 12.52s
nova-cell : Get a list of existing cells ------------------------------- 11.81s
nova-cell : Create cell ------------------------------------------------ 10.40s
nova-cell : Discover nova hosts ---------------------------------------- 10.18s
nova-cell : Get a list of existing cells ------------------------------- 10.14s
nova-cell : Fail if nova-compute service failed to register ------------- 9.81s
service-rabbitmq : nova | Ensure RabbitMQ users exist ------------------- 8.37s
nova : Copying over nova.conf ------------------------------------------- 7.64s
nova-cell : Restart nova-conductor container ---------------------------- 7.32s
nova-cell : Copying files for nova-ssh ---------------------------------- 7.20s
2025-05-25 04:12:00.011045 | orchestrator | 2025-05-25 04:11:59 | INFO  | Task 54113aef-c85d-4bae-9b37-5c6f7b29248e is in state STARTED
2025-05-25 04:12:00.011054 | orchestrator | 2025-05-25 04:11:59 | INFO  | Wait 1 second(s) until the next check
2025-05-25 04:12:03.048319 | orchestrator | 2025-05-25 04:12:03 | INFO  | Task 8d971383-7af0-4460-aa31-cdabb83173b7 is in state STARTED
2025-05-25 04:12:03.052164 | orchestrator | 2025-05-25 04:12:03 | INFO  | Task 54113aef-c85d-4bae-9b37-5c6f7b29248e is in state STARTED
2025-05-25 04:12:03.052247 | orchestrator | 2025-05-25 04:12:03 | INFO  | Wait 1 second(s) until the next check
2025-05-25 04:12:06.097119 | orchestrator | 2025-05-25 04:12:06 | INFO  | Task 8d971383-7af0-4460-aa31-cdabb83173b7 is in state STARTED
2025-05-25 04:12:06.104690 | orchestrator | PLAY [Group hosts based on configuration] **************************************

TASK [Group hosts based on Kolla action] ***************************************
Sunday 25 May 2025 04:09:57 +0000 (0:00:00.269)       0:00:00.269 ************
ok: [testbed-node-0]
ok: [testbed-node-1]
ok: [testbed-node-2]

TASK [Group hosts based on enabled services] ***********************************
Sunday 25 May 2025 04:09:57 +0000 (0:00:00.285)       0:00:00.554 ************
ok: [testbed-node-0] => (item=enable_grafana_True)
ok: [testbed-node-1] => (item=enable_grafana_True)
2025-05-25 04:12:06.104826 | orchestrator | ok: [testbed-node-2] => (item=enable_grafana_True)

PLAY [Apply role grafana] ******************************************************

TASK [grafana : include_tasks] *************************************************
Sunday 25 May 2025 04:09:57 +0000 (0:00:00.397)       0:00:00.952 ************
included: /ansible/roles/grafana/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2

TASK [grafana : Ensuring config directories exist] *****************************
Sunday 25 May 2025 04:09:58 +0000 (0:00:00.522)       0:00:01.475 ************
changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})
changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})
changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})

TASK [grafana : Check if extra configuration file exists] **********************
Sunday 25 May 2025 04:09:59 +0000 (0:00:00.710)       0:00:02.186 ************
[WARNING]: Skipped '/operations/prometheus/grafana' path due to this access issue: '/operations/prometheus/grafana' is not a directory
ok: [testbed-node-0 -> localhost]

TASK [grafana : include_tasks] *************************************************
Sunday 25 May 2025 04:10:00 +0000 (0:00:00.853)       0:00:03.039 ************
included: /ansible/roles/grafana/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2

TASK [service-cert-copy : grafana | Copying over extra CA certificates] ********
Sunday 25 May 2025 04:10:00 +0000 (0:00:00.692)       0:00:03.731 ************
changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})
changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})
changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})

TASK [service-cert-copy : grafana | Copying over backend internal TLS certificate] ***
Sunday 25 May 2025 04:10:02 +0000 (0:00:01.323)       0:00:05.055 ************
skipping: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})
skipping: [testbed-node-0]
skipping: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})
skipping: [testbed-node-1]
skipping: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})
skipping: [testbed-node-2]

TASK [service-cert-copy : grafana | Copying over backend internal TLS key] *****
Sunday 25 May 2025 04:10:02 +0000 (0:00:00.341)       0:00:05.397 ************
skipping: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})
skipping: [testbed-node-0]
skipping: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})
skipping: [testbed-node-1]
skipping: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})
skipping: [testbed-node-2]

TASK [grafana : Copying over config.json files] ********************************
Sunday 25 May 2025 04:10:03 +0000 (0:00:00.855)       0:00:06.252 ************
changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})
changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})
changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})

TASK [grafana : Copying over grafana.ini] **************************************
Sunday 25 May 2025 04:10:04 +0000 (0:00:01.210)       0:00:07.463 ************
changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})
changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})
changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})

TASK [grafana : Copying over extra configuration file] *************************
Sunday 25 May 2025 04:10:05 +0000 (0:00:01.259)       0:00:08.722 ************
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [grafana : Configuring Prometheus as data source for Grafana] *************
Sunday 25 May 2025 04:10:06 +0000 (0:00:00.548)       0:00:09.271 ************
changed: [testbed-node-0] => (item=/ansible/roles/grafana/templates/prometheus.yaml.j2)
changed: [testbed-node-1] => (item=/ansible/roles/grafana/templates/prometheus.yaml.j2)
changed: [testbed-node-2] => (item=/ansible/roles/grafana/templates/prometheus.yaml.j2)

TASK [grafana : Configuring dashboards provisioning] ***************************
Sunday 25
May 2025 04:10:07 +0000 (0:00:01.239) 0:00:10.510 ************ 2025-05-25 04:12:06.105918 | orchestrator | changed: [testbed-node-0] => (item=/opt/configuration/environments/kolla/files/overlays/grafana/provisioning.yaml) 2025-05-25 04:12:06.105929 | orchestrator | changed: [testbed-node-1] => (item=/opt/configuration/environments/kolla/files/overlays/grafana/provisioning.yaml) 2025-05-25 04:12:06.105940 | orchestrator | changed: [testbed-node-2] => (item=/opt/configuration/environments/kolla/files/overlays/grafana/provisioning.yaml) 2025-05-25 04:12:06.105951 | orchestrator | 2025-05-25 04:12:06.105962 | orchestrator | TASK [grafana : Find custom grafana dashboards] ******************************** 2025-05-25 04:12:06.105972 | orchestrator | Sunday 25 May 2025 04:10:08 +0000 (0:00:01.131) 0:00:11.641 ************ 2025-05-25 04:12:06.106008 | orchestrator | ok: [testbed-node-0 -> localhost] 2025-05-25 04:12:06.106075 | orchestrator | 2025-05-25 04:12:06.106089 | orchestrator | TASK [grafana : Find templated grafana dashboards] ***************************** 2025-05-25 04:12:06.106100 | orchestrator | Sunday 25 May 2025 04:10:09 +0000 (0:00:00.714) 0:00:12.356 ************ 2025-05-25 04:12:06.106111 | orchestrator | [WARNING]: Skipped '/etc/kolla/grafana/dashboards' path due to this access 2025-05-25 04:12:06.106122 | orchestrator | issue: '/etc/kolla/grafana/dashboards' is not a directory 2025-05-25 04:12:06.106132 | orchestrator | ok: [testbed-node-0] 2025-05-25 04:12:06.106143 | orchestrator | ok: [testbed-node-1] 2025-05-25 04:12:06.106153 | orchestrator | ok: [testbed-node-2] 2025-05-25 04:12:06.106164 | orchestrator | 2025-05-25 04:12:06.106175 | orchestrator | TASK [grafana : Prune templated Grafana dashboards] **************************** 2025-05-25 04:12:06.106185 | orchestrator | Sunday 25 May 2025 04:10:10 +0000 (0:00:00.677) 0:00:13.033 ************ 2025-05-25 04:12:06.106196 | orchestrator | skipping: [testbed-node-0] 2025-05-25 04:12:06.106207 | 
orchestrator | skipping: [testbed-node-1] 2025-05-25 04:12:06.106217 | orchestrator | skipping: [testbed-node-2] 2025-05-25 04:12:06.106228 | orchestrator | 2025-05-25 04:12:06.106238 | orchestrator | TASK [grafana : Copying over custom dashboards] ******************************** 2025-05-25 04:12:06.106249 | orchestrator | Sunday 25 May 2025 04:10:10 +0000 (0:00:00.546) 0:00:13.580 ************ 2025-05-25 04:12:06.106273 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/rgw-s3-analytics.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rgw-s3-analytics.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 167897, 'inode': 1079382, 'dev': 212, 'nlink': 1, 'atime': 1748131327.0, 'mtime': 1748131327.0, 'ctime': 1748143066.8645303, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-25 04:12:06.106286 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/rgw-s3-analytics.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rgw-s3-analytics.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 167897, 'inode': 1079382, 'dev': 212, 'nlink': 1, 'atime': 1748131327.0, 'mtime': 1748131327.0, 'ctime': 1748143066.8645303, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-25 04:12:06.106298 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/rgw-s3-analytics.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rgw-s3-analytics.json', 'mode': '0644', 'isdir': False, 'ischr': 
False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 167897, 'inode': 1079382, 'dev': 212, 'nlink': 1, 'atime': 1748131327.0, 'mtime': 1748131327.0, 'ctime': 1748143066.8645303, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-25 04:12:06.106310 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/radosgw-detail.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-detail.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 19695, 'inode': 1079351, 'dev': 212, 'nlink': 1, 'atime': 1748131327.0, 'mtime': 1748131327.0, 'ctime': 1748143066.8515303, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-25 04:12:06.106350 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/radosgw-detail.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-detail.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 19695, 'inode': 1079351, 'dev': 212, 'nlink': 1, 'atime': 1748131327.0, 'mtime': 1748131327.0, 'ctime': 1748143066.8515303, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-25 04:12:06.106363 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/radosgw-detail.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-detail.json', 'mode': 
'0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 19695, 'inode': 1079351, 'dev': 212, 'nlink': 1, 'atime': 1748131327.0, 'mtime': 1748131327.0, 'ctime': 1748143066.8515303, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-25 04:12:06.106380 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/osds-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/osds-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 38432, 'inode': 1079348, 'dev': 212, 'nlink': 1, 'atime': 1748131327.0, 'mtime': 1748131327.0, 'ctime': 1748143066.8465302, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-25 04:12:06.106398 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/osds-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/osds-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 38432, 'inode': 1079348, 'dev': 212, 'nlink': 1, 'atime': 1748131327.0, 'mtime': 1748131327.0, 'ctime': 1748143066.8465302, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-25 04:12:06.106409 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/osds-overview.json', 'value': {'path': 
'/operations/grafana/dashboards/ceph/osds-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 38432, 'inode': 1079348, 'dev': 212, 'nlink': 1, 'atime': 1748131327.0, 'mtime': 1748131327.0, 'ctime': 1748143066.8465302, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-25 04:12:06.106421 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/rbd-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rbd-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12997, 'inode': 1079366, 'dev': 212, 'nlink': 1, 'atime': 1748131327.0, 'mtime': 1748131327.0, 'ctime': 1748143066.8615303, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-25 04:12:06.106432 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/rbd-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rbd-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12997, 'inode': 1079366, 'dev': 212, 'nlink': 1, 'atime': 1748131327.0, 'mtime': 1748131327.0, 'ctime': 1748143066.8615303, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-25 04:12:06.106471 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/rbd-details.json', 
'value': {'path': '/operations/grafana/dashboards/ceph/rbd-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12997, 'inode': 1079366, 'dev': 212, 'nlink': 1, 'atime': 1748131327.0, 'mtime': 1748131327.0, 'ctime': 1748143066.8615303, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-25 04:12:06.106484 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/host-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/host-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 44791, 'inode': 1079341, 'dev': 212, 'nlink': 1, 'atime': 1748131327.0, 'mtime': 1748131327.0, 'ctime': 1748143066.8405302, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-25 04:12:06.106579 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/host-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/host-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 44791, 'inode': 1079341, 'dev': 212, 'nlink': 1, 'atime': 1748131327.0, 'mtime': 1748131327.0, 'ctime': 1748143066.8405302, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-25 04:12:06.106595 | orchestrator | changed: [testbed-node-1] => (item={'key': 
'ceph/host-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/host-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 44791, 'inode': 1079341, 'dev': 212, 'nlink': 1, 'atime': 1748131327.0, 'mtime': 1748131327.0, 'ctime': 1748143066.8405302, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-25 04:12:06.106607 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/pool-detail.json', 'value': {'path': '/operations/grafana/dashboards/ceph/pool-detail.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 19609, 'inode': 1079349, 'dev': 212, 'nlink': 1, 'atime': 1748131327.0, 'mtime': 1748131327.0, 'ctime': 1748143066.8465302, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-25 04:12:06.106618 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/pool-detail.json', 'value': {'path': '/operations/grafana/dashboards/ceph/pool-detail.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 19609, 'inode': 1079349, 'dev': 212, 'nlink': 1, 'atime': 1748131327.0, 'mtime': 1748131327.0, 'ctime': 1748143066.8465302, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-25 04:12:06.106659 | orchestrator | changed: [testbed-node-1] => 
(item={'key': 'ceph/pool-detail.json', 'value': {'path': '/operations/grafana/dashboards/ceph/pool-detail.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 19609, 'inode': 1079349, 'dev': 212, 'nlink': 1, 'atime': 1748131327.0, 'mtime': 1748131327.0, 'ctime': 1748143066.8465302, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-25 04:12:06.106672 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/radosgw-sync-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-sync-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 16156, 'inode': 1079364, 'dev': 212, 'nlink': 1, 'atime': 1748131327.0, 'mtime': 1748131327.0, 'ctime': 1748143066.8555303, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-25 04:12:06.106697 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/radosgw-sync-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-sync-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 16156, 'inode': 1079364, 'dev': 212, 'nlink': 1, 'atime': 1748131327.0, 'mtime': 1748131327.0, 'ctime': 1748143066.8555303, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-25 
04:12:06.106710 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/radosgw-sync-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-sync-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 16156, 'inode': 1079364, 'dev': 212, 'nlink': 1, 'atime': 1748131327.0, 'mtime': 1748131327.0, 'ctime': 1748143066.8555303, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-25 04:12:06.106721 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/cephfs-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/cephfs-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 9025, 'inode': 1079340, 'dev': 212, 'nlink': 1, 'atime': 1748131327.0, 'mtime': 1748131327.0, 'ctime': 1748143066.8395302, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-25 04:12:06.106732 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/cephfs-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/cephfs-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 9025, 'inode': 1079340, 'dev': 212, 'nlink': 1, 'atime': 1748131327.0, 'mtime': 1748131327.0, 'ctime': 1748143066.8395302, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 
'isuid': False, 'isgid': False}}) 2025-05-25 04:12:06.106744 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/cephfs-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/cephfs-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 9025, 'inode': 1079340, 'dev': 212, 'nlink': 1, 'atime': 1748131327.0, 'mtime': 1748131327.0, 'ctime': 1748143066.8395302, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-25 04:12:06.106781 | orchestrator | 2025-05-25 04:12:06 | INFO  | Task 54113aef-c85d-4bae-9b37-5c6f7b29248e is in state SUCCESS 2025-05-25 04:12:06.106795 | orchestrator | 2025-05-25 04:12:06 | INFO  | Wait 1 second(s) until the next check 2025-05-25 04:12:06.106814 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/README.md', 'value': {'path': '/operations/grafana/dashboards/ceph/README.md', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 84, 'inode': 1078815, 'dev': 212, 'nlink': 1, 'atime': 1748131327.0, 'mtime': 1748131327.0, 'ctime': 1748143066.681529, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-25 04:12:06.106830 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/README.md', 'value': {'path': '/operations/grafana/dashboards/ceph/README.md', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 84, 'inode': 1078815, 'dev': 212, 'nlink': 1, 'atime': 
1748131327.0, 'mtime': 1748131327.0, 'ctime': 1748143066.681529, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-25 04:12:06.106843 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/README.md', 'value': {'path': '/operations/grafana/dashboards/ceph/README.md', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 84, 'inode': 1078815, 'dev': 212, 'nlink': 1, 'atime': 1748131327.0, 'mtime': 1748131327.0, 'ctime': 1748143066.681529, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-25 04:12:06.106854 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/hosts-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/hosts-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 27218, 'inode': 1079343, 'dev': 212, 'nlink': 1, 'atime': 1748131327.0, 'mtime': 1748131327.0, 'ctime': 1748143066.8415303, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-25 04:12:06.106865 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/hosts-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/hosts-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 27218, 'inode': 1079343, 'dev': 212, 'nlink': 1, 
'atime': 1748131327.0, 'mtime': 1748131327.0, 'ctime': 1748143066.8415303, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-25 04:12:06.106876 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/hosts-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/hosts-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 27218, 'inode': 1079343, 'dev': 212, 'nlink': 1, 'atime': 1748131327.0, 'mtime': 1748131327.0, 'ctime': 1748143066.8415303, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-25 04:12:06.106914 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/ceph-cluster.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph-cluster.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 34113, 'inode': 1078821, 'dev': 212, 'nlink': 1, 'atime': 1748131327.0, 'mtime': 1748131327.0, 'ctime': 1748143066.8385303, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-25 04:12:06.106938 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/ceph-cluster.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph-cluster.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 34113, 'inode': 1078821, 
'dev': 212, 'nlink': 1, 'atime': 1748131327.0, 'mtime': 1748131327.0, 'ctime': 1748143066.8385303, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-25 04:12:06.106950 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/ceph-cluster.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph-cluster.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 34113, 'inode': 1078821, 'dev': 212, 'nlink': 1, 'atime': 1748131327.0, 'mtime': 1748131327.0, 'ctime': 1748143066.8385303, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-25 04:12:06.106961 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/radosgw-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 39556, 'inode': 1079356, 'dev': 212, 'nlink': 1, 'atime': 1748131327.0, 'mtime': 1748131327.0, 'ctime': 1748143066.8555303, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-25 04:12:06.106973 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/radosgw-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 
'size': 39556, 'inode': 1079356, 'dev': 212, 'nlink': 1, 'atime': 1748131327.0, 'mtime': 1748131327.0, 'ctime': 1748143066.8555303, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-05-25 04:12:06.106984 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/radosgw-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 39556, 'inode': 1079356, 'dev': 212, 'nlink': 1, 'atime': 1748131327.0, 'mtime': 1748131327.0, 'ctime': 1748143066.8555303, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-05-25 04:12:06.107022 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/multi-cluster-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/multi-cluster-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 62676, 'inode': 1079345, 'dev': 212, 'nlink': 1, 'atime': 1748131327.0, 'mtime': 1748131327.0, 'ctime': 1748143066.8425303, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-05-25 04:12:06.107042 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/multi-cluster-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/multi-cluster-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 62676, 'inode': 1079345, 'dev': 212, 'nlink': 1, 'atime': 1748131327.0, 'mtime': 1748131327.0, 'ctime': 1748143066.8425303, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-05-25 04:12:06.107059 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/multi-cluster-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/multi-cluster-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 62676, 'inode': 1079345, 'dev': 212, 'nlink': 1, 'atime': 1748131327.0, 'mtime': 1748131327.0, 'ctime': 1748143066.8425303, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-05-25 04:12:06.107070 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/rbd-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rbd-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 25686, 'inode': 1079381, 'dev': 212, 'nlink': 1, 'atime': 1748131327.0, 'mtime': 1748131327.0, 'ctime': 1748143066.8615303, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-05-25 04:12:06.107080 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/rbd-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rbd-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 25686, 'inode': 1079381, 'dev': 212, 'nlink': 1, 'atime': 1748131327.0, 'mtime': 1748131327.0, 'ctime': 1748143066.8615303, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-05-25 04:12:06.107090 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/rbd-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rbd-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 25686, 'inode': 1079381, 'dev': 212, 'nlink': 1, 'atime': 1748131327.0, 'mtime': 1748131327.0, 'ctime': 1748143066.8615303, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-05-25 04:12:06.107105 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/ceph_pools.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph_pools.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 25279, 'inode': 1079339, 'dev': 212, 'nlink': 1, 'atime': 1748131327.0, 'mtime': 1748131327.0, 'ctime': 1748143066.8395302, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-05-25 04:12:06.107126 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/ceph_pools.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph_pools.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 25279, 'inode': 1079339, 'dev': 212, 'nlink': 1, 'atime': 1748131327.0, 'mtime': 1748131327.0, 'ctime': 1748143066.8395302, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-05-25 04:12:06.107141 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/ceph_pools.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph_pools.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 25279, 'inode': 1079339, 'dev': 212, 'nlink': 1, 'atime': 1748131327.0, 'mtime': 1748131327.0, 'ctime': 1748143066.8395302, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-05-25 04:12:06.107151 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/pool-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/pool-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 49139, 'inode': 1079350, 'dev': 212, 'nlink': 1, 'atime': 1748131327.0, 'mtime': 1748131327.0, 'ctime': 1748143066.8475304, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-05-25 04:12:06.107161 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/pool-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/pool-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 49139, 'inode': 1079350, 'dev': 212, 'nlink': 1, 'atime': 1748131327.0, 'mtime': 1748131327.0, 'ctime': 1748143066.8475304, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-05-25 04:12:06.107171 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/pool-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/pool-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 49139, 'inode': 1079350, 'dev': 212, 'nlink': 1, 'atime': 1748131327.0, 'mtime': 1748131327.0, 'ctime': 1748143066.8475304, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-05-25 04:12:06.107181 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/ceph-cluster-advanced.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph-cluster-advanced.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 117836, 'inode': 1078817, 'dev': 212, 'nlink': 1, 'atime': 1748131327.0, 'mtime': 1748131327.0, 'ctime': 1748143066.682529, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-05-25 04:12:06.107208 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/ceph-cluster-advanced.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph-cluster-advanced.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 117836, 'inode': 1078817, 'dev': 212, 'nlink': 1, 'atime': 1748131327.0, 'mtime': 1748131327.0, 'ctime': 1748143066.682529, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-05-25 04:12:06.107219 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/ceph-cluster-advanced.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph-cluster-advanced.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 117836, 'inode': 1078817, 'dev': 212, 'nlink': 1, 'atime': 1748131327.0, 'mtime': 1748131327.0, 'ctime': 1748143066.682529, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-05-25 04:12:06.107233 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/ceph_overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph_overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 80386, 'inode': 1079338, 'dev': 212, 'nlink': 1, 'atime': 1748131327.0, 'mtime': 1748131327.0, 'ctime': 1748143066.8385303, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-05-25 04:12:06.107244 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/ceph_overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph_overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 80386, 'inode': 1079338, 'dev': 212, 'nlink': 1, 'atime': 1748131327.0, 'mtime': 1748131327.0, 'ctime': 1748143066.8385303, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-05-25 04:12:06.107254 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/ceph_overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph_overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 80386, 'inode': 1079338, 'dev': 212, 'nlink': 1, 'atime': 1748131327.0, 'mtime': 1748131327.0, 'ctime': 1748143066.8385303, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-05-25 04:12:06.107264 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/osd-device-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/osd-device-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 26655, 'inode': 1079347, 'dev': 212, 'nlink': 1, 'atime': 1748131327.0, 'mtime': 1748131327.0, 'ctime': 1748143066.8435302, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-05-25 04:12:06.107286 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/osd-device-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/osd-device-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 26655, 'inode': 1079347, 'dev': 212, 'nlink': 1, 'atime': 1748131327.0, 'mtime': 1748131327.0, 'ctime': 1748143066.8435302, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-05-25 04:12:06.107297 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/osd-device-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/osd-device-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 26655, 'inode': 1079347, 'dev': 212, 'nlink': 1, 'atime': 1748131327.0, 'mtime': 1748131327.0, 'ctime': 1748143066.8435302, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-05-25 04:12:06.107311 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/node_exporter_full.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node_exporter_full.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 682774, 'inode': 1079436, 'dev': 212, 'nlink': 1, 'atime': 1748131327.0, 'mtime': 1748131327.0, 'ctime': 1748143066.8955307, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-05-25 04:12:06.107321 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/node_exporter_full.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node_exporter_full.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 682774, 'inode': 1079436, 'dev': 212, 'nlink': 1, 'atime': 1748131327.0, 'mtime': 1748131327.0, 'ctime': 1748143066.8955307, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-05-25 04:12:06.107331 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/node_exporter_full.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node_exporter_full.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 682774, 'inode': 1079436, 'dev': 212, 'nlink': 1, 'atime': 1748131327.0, 'mtime': 1748131327.0, 'ctime': 1748143066.8955307, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-05-25 04:12:06.107342 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/libvirt.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/libvirt.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 29672, 'inode': 1079421, 'dev': 212, 'nlink': 1, 'atime': 1748131327.0, 'mtime': 1748131327.0, 'ctime': 1748143066.8835306, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-05-25 04:12:06.107365 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/libvirt.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/libvirt.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 29672, 'inode': 1079421, 'dev': 212, 'nlink': 1, 'atime': 1748131327.0, 'mtime': 1748131327.0, 'ctime': 1748143066.8835306, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-05-25 04:12:06.107376 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/libvirt.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/libvirt.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 29672, 'inode': 1079421, 'dev': 212, 'nlink': 1, 'atime': 1748131327.0, 'mtime': 1748131327.0, 'ctime': 1748143066.8835306, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-05-25 04:12:06.107390 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/alertmanager-overview.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/alertmanager-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 9645, 'inode': 1079386, 'dev': 212, 'nlink': 1, 'atime': 1748131327.0, 'mtime': 1748131327.0, 'ctime': 1748143066.8655305, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-05-25 04:12:06.107400 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/alertmanager-overview.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/alertmanager-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 9645, 'inode': 1079386, 'dev': 212, 'nlink': 1, 'atime': 1748131327.0, 'mtime': 1748131327.0, 'ctime': 1748143066.8655305, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-05-25 04:12:06.107410 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/alertmanager-overview.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/alertmanager-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 9645, 'inode': 1079386, 'dev': 212, 'nlink': 1, 'atime': 1748131327.0, 'mtime': 1748131327.0, 'ctime': 1748143066.8655305, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-05-25 04:12:06.107420 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/prometheus_alertmanager.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus_alertmanager.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 115472, 'inode': 1079475, 'dev': 212, 'nlink': 1, 'atime': 1748131327.0, 'mtime': 1748131327.0, 'ctime': 1748143066.9015307, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-05-25 04:12:06.107441 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/prometheus_alertmanager.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus_alertmanager.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 115472, 'inode': 1079475, 'dev': 212, 'nlink': 1, 'atime': 1748131327.0, 'mtime': 1748131327.0, 'ctime': 1748143066.9015307, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-05-25 04:12:06.107451 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/prometheus_alertmanager.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus_alertmanager.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 115472, 'inode': 1079475, 'dev': 212, 'nlink': 1, 'atime': 1748131327.0, 'mtime': 1748131327.0, 'ctime': 1748143066.9015307, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-05-25 04:12:06.107466 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/blackbox.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/blackbox.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 31128, 'inode': 1079388, 'dev': 212, 'nlink': 1, 'atime': 1748131327.0, 'mtime': 1748131327.0, 'ctime': 1748143066.8655305, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-05-25 04:12:06.107476 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/blackbox.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/blackbox.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 31128, 'inode': 1079388, 'dev': 212, 'nlink': 1, 'atime': 1748131327.0, 'mtime': 1748131327.0, 'ctime': 1748143066.8655305, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-05-25 04:12:06.107486 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/blackbox.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/blackbox.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 31128, 'inode': 1079388, 'dev': 212, 'nlink': 1, 'atime': 1748131327.0, 'mtime': 1748131327.0, 'ctime': 1748143066.8655305, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-05-25 04:12:06.107497 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/prometheus-remote-write.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus-remote-write.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 22317, 'inode': 1079472, 'dev': 212, 'nlink': 1, 'atime': 1748131327.0, 'mtime': 1748131327.0, 'ctime': 1748143066.9005306, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-05-25 04:12:06.107534 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/prometheus-remote-write.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus-remote-write.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 22317, 'inode': 1079472, 'dev': 212, 'nlink': 1, 'atime': 1748131327.0, 'mtime': 1748131327.0, 'ctime': 1748143066.9005306, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-05-25 04:12:06.107551 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/prometheus-remote-write.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus-remote-write.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 22317, 'inode': 1079472, 'dev': 212, 'nlink': 1, 'atime': 1748131327.0, 'mtime': 1748131327.0, 'ctime': 1748143066.9005306, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-05-25 04:12:06.107566 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/rabbitmq.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/rabbitmq.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 222049, 'inode': 1079477, 'dev': 212, 'nlink': 1, 'atime': 1748131327.0, 'mtime': 1748131327.0, 'ctime': 1748143066.9035306, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-05-25 04:12:06.107577 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/rabbitmq.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/rabbitmq.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 222049, 'inode': 1079477, 'dev': 212, 'nlink': 1, 'atime': 1748131327.0, 'mtime': 1748131327.0, 'ctime': 1748143066.9035306, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-05-25 04:12:06.107587 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/rabbitmq.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/rabbitmq.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 222049, 'inode': 1079477, 'dev': 212, 'nlink': 1, 'atime': 1748131327.0, 'mtime': 1748131327.0, 'ctime': 1748143066.9035306, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-05-25 04:12:06.107597 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/node_exporter_side_by_side.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node_exporter_side_by_side.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 70691, 'inode': 1079456, 'dev': 212, 'nlink': 1, 'atime': 1748131327.0, 'mtime': 1748131327.0, 'ctime': 1748143066.8975306, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-05-25 04:12:06.107614 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/node_exporter_side_by_side.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node_exporter_side_by_side.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 70691, 'inode': 1079456, 'dev': 212, 'nlink': 1, 'atime': 1748131327.0, 'mtime': 1748131327.0, 'ctime': 1748143066.8975306, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-05-25 04:12:06.107629 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/node_exporter_side_by_side.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node_exporter_side_by_side.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 70691, 'inode': 1079456, 'dev': 212, 'nlink': 1, 'atime': 1748131327.0, 'mtime': 1748131327.0, 'ctime': 1748143066.8975306, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-05-25 04:12:06.107644 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/opensearch.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/opensearch.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 65458, 'inode': 1079470, 'dev': 212, 'nlink': 1, 'atime': 1748131327.0, 'mtime': 1748131327.0, 'ctime': 1748143066.8995306, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-05-25 04:12:06.107654 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/opensearch.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/opensearch.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 65458, 'inode': 1079470, 'dev': 212, 'nlink': 1, 'atime': 1748131327.0, 'mtime': 1748131327.0, 'ctime': 1748143066.8995306, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-05-25 04:12:06.107664 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/cadvisor.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/cadvisor.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 53882, 'inode': 1079390, 'dev': 212, 'nlink': 1, 'atime': 1748131327.0, 'mtime': 1748131327.0, 'ctime': 1748143066.8665304, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-05-25 04:12:06.107681 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/opensearch.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/opensearch.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 65458, 'inode': 1079470, 'dev': 212, 'nlink': 1, 'atime': 1748131327.0, 'mtime': 1748131327.0, 'ctime': 1748143066.8995306, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-05-25 04:12:06.107692 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/cadvisor.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/cadvisor.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 53882, 'inode': 1079390, 'dev': 212, 'nlink': 1, 'atime': 1748131327.0, 'mtime': 1748131327.0, 'ctime': 1748143066.8665304, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-05-25 04:12:06.107708 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/memcached.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/memcached.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 24243, 'inode': 1079423, 'dev': 212, 'nlink': 1, 'atime': 1748131327.0, 'mtime': 1748131327.0, 'ctime': 1748143066.8855305, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-05-25 04:12:06.107718 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/cadvisor.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/cadvisor.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 53882, 'inode': 1079390, 'dev': 212, 'nlink': 1, 'atime': 1748131327.0, 'mtime': 1748131327.0, 'ctime': 1748143066.8665304, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-05-25 04:12:06.107732 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/memcached.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/memcached.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 24243, 'inode': 1079423, 'dev': 212, 'nlink': 1, 'atime': 1748131327.0, 'mtime': 1748131327.0, 'ctime': 1748143066.8855305, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-05-25 04:12:06.107743 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/memcached.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/memcached.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 24243, 'inode': 1079423, 'dev': 212, 'nlink': 1, 'atime': 1748131327.0, 'mtime': 1748131327.0, 'ctime': 1748143066.8855305, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-05-25 04:12:06.107759 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/redfish.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/redfish.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 38087, 'inode': 1079481, 'dev': 212, 'nlink': 1, 'atime': 1748131327.0, 'mtime': 1748131327.0, 'ctime': 1748143066.9035306, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-05-25 04:12:06.107769 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/redfish.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/redfish.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 38087, 'inode': 1079481, 'dev': 212, 'nlink': 1, 'atime': 1748131327.0, 'mtime': 1748131327.0, 'ctime': 1748143066.9035306, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-05-25 04:12:06.107784 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/redfish.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/redfish.json', 'mode': '0644', 'isdir': False,
'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 38087, 'inode': 1079481, 'dev': 212, 'nlink': 1, 'atime': 1748131327.0, 'mtime': 1748131327.0, 'ctime': 1748143066.9035306, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-25 04:12:06.107794 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/prometheus.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 21898, 'inode': 1079473, 'dev': 212, 'nlink': 1, 'atime': 1748131327.0, 'mtime': 1748131327.0, 'ctime': 1748143066.9005306, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-25 04:12:06.107809 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/prometheus.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 21898, 'inode': 1079473, 'dev': 212, 'nlink': 1, 'atime': 1748131327.0, 'mtime': 1748131327.0, 'ctime': 1748143066.9005306, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-25 04:12:06.107819 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/prometheus.json', 'value': {'path': 
'/operations/grafana/dashboards/infrastructure/prometheus.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 21898, 'inode': 1079473, 'dev': 212, 'nlink': 1, 'atime': 1748131327.0, 'mtime': 1748131327.0, 'ctime': 1748143066.9005306, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-25 04:12:06.107835 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/elasticsearch.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/elasticsearch.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 187864, 'inode': 1079394, 'dev': 212, 'nlink': 1, 'atime': 1748131327.0, 'mtime': 1748131327.0, 'ctime': 1748143066.8685305, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-25 04:12:06.107845 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/elasticsearch.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/elasticsearch.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 187864, 'inode': 1079394, 'dev': 212, 'nlink': 1, 'atime': 1748131327.0, 'mtime': 1748131327.0, 'ctime': 1748143066.8685305, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-25 04:12:06.107859 | orchestrator | changed: 
[testbed-node-0] => (item={'key': 'infrastructure/database.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/database.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 30898, 'inode': 1079393, 'dev': 212, 'nlink': 1, 'atime': 1748131327.0, 'mtime': 1748131327.0, 'ctime': 1748143066.8665304, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-25 04:12:06.107869 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/elasticsearch.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/elasticsearch.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 187864, 'inode': 1079394, 'dev': 212, 'nlink': 1, 'atime': 1748131327.0, 'mtime': 1748131327.0, 'ctime': 1748143066.8685305, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-25 04:12:06.107883 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/database.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/database.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 30898, 'inode': 1079393, 'dev': 212, 'nlink': 1, 'atime': 1748131327.0, 'mtime': 1748131327.0, 'ctime': 1748143066.8665304, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 
'isgid': False}}) 2025-05-25 04:12:06.107894 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/database.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/database.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 30898, 'inode': 1079393, 'dev': 212, 'nlink': 1, 'atime': 1748131327.0, 'mtime': 1748131327.0, 'ctime': 1748143066.8665304, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-25 04:12:06.107909 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/fluentd.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/fluentd.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 82960, 'inode': 1079398, 'dev': 212, 'nlink': 1, 'atime': 1748131327.0, 'mtime': 1748131327.0, 'ctime': 1748143066.8745306, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-25 04:12:06.107919 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/fluentd.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/fluentd.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 82960, 'inode': 1079398, 'dev': 212, 'nlink': 1, 'atime': 1748131327.0, 'mtime': 1748131327.0, 'ctime': 1748143066.8745306, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': 
False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-25 04:12:06.107930 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/haproxy.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/haproxy.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 410814, 'inode': 1079407, 'dev': 212, 'nlink': 1, 'atime': 1748131327.0, 'mtime': 1748131327.0, 'ctime': 1748143066.8825305, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-25 04:12:06.107945 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/fluentd.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/fluentd.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 82960, 'inode': 1079398, 'dev': 212, 'nlink': 1, 'atime': 1748131327.0, 'mtime': 1748131327.0, 'ctime': 1748143066.8745306, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-25 04:12:06.107974 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/haproxy.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/haproxy.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 410814, 'inode': 1079407, 'dev': 212, 'nlink': 1, 'atime': 1748131327.0, 'mtime': 1748131327.0, 'ctime': 1748143066.8825305, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 
'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-25 04:12:06.107985 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/node-cluster-rsrc-use.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node-cluster-rsrc-use.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 16098, 'inode': 1079433, 'dev': 212, 'nlink': 1, 'atime': 1748131327.0, 'mtime': 1748131327.0, 'ctime': 1748143066.8865306, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-25 04:12:06.108000 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/haproxy.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/haproxy.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 410814, 'inode': 1079407, 'dev': 212, 'nlink': 1, 'atime': 1748131327.0, 'mtime': 1748131327.0, 'ctime': 1748143066.8825305, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-25 04:12:06.108010 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/node-cluster-rsrc-use.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node-cluster-rsrc-use.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 16098, 'inode': 1079433, 'dev': 212, 'nlink': 1, 'atime': 1748131327.0, 'mtime': 1748131327.0, 
'ctime': 1748143066.8865306, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-25 04:12:06.108020 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/nodes.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/nodes.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 21109, 'inode': 1079463, 'dev': 212, 'nlink': 1, 'atime': 1748131327.0, 'mtime': 1748131327.0, 'ctime': 1748143066.8985307, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-25 04:12:06.108036 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/node-cluster-rsrc-use.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node-cluster-rsrc-use.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 16098, 'inode': 1079433, 'dev': 212, 'nlink': 1, 'atime': 1748131327.0, 'mtime': 1748131327.0, 'ctime': 1748143066.8865306, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-25 04:12:06.108047 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/nodes.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/nodes.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 21109, 'inode': 1079463, 
'dev': 212, 'nlink': 1, 'atime': 1748131327.0, 'mtime': 1748131327.0, 'ctime': 1748143066.8985307, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-25 04:12:06.108062 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/node-rsrc-use.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node-rsrc-use.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 15725, 'inode': 1079435, 'dev': 212, 'nlink': 1, 'atime': 1748131327.0, 'mtime': 1748131327.0, 'ctime': 1748143066.8865306, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-25 04:12:06.108078 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/nodes.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/nodes.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 21109, 'inode': 1079463, 'dev': 212, 'nlink': 1, 'atime': 1748131327.0, 'mtime': 1748131327.0, 'ctime': 1748143066.8985307, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-25 04:12:06.108088 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/node-rsrc-use.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node-rsrc-use.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 
'issock': False, 'uid': 0, 'gid': 0, 'size': 15725, 'inode': 1079435, 'dev': 212, 'nlink': 1, 'atime': 1748131327.0, 'mtime': 1748131327.0, 'ctime': 1748143066.8865306, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-25 04:12:06.108098 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openstack/openstack.json', 'value': {'path': '/operations/grafana/dashboards/openstack/openstack.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 57270, 'inode': 1079485, 'dev': 212, 'nlink': 1, 'atime': 1748131327.0, 'mtime': 1748131327.0, 'ctime': 1748143066.9095306, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-25 04:12:06.108113 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/node-rsrc-use.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node-rsrc-use.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 15725, 'inode': 1079435, 'dev': 212, 'nlink': 1, 'atime': 1748131327.0, 'mtime': 1748131327.0, 'ctime': 1748143066.8865306, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-25 04:12:06.108123 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openstack/openstack.json', 'value': {'path': '/operations/grafana/dashboards/openstack/openstack.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 
'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 57270, 'inode': 1079485, 'dev': 212, 'nlink': 1, 'atime': 1748131327.0, 'mtime': 1748131327.0, 'ctime': 1748143066.9095306, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-25 04:12:06.108139 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openstack/openstack.json', 'value': {'path': '/operations/grafana/dashboards/openstack/openstack.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 57270, 'inode': 1079485, 'dev': 212, 'nlink': 1, 'atime': 1748131327.0, 'mtime': 1748131327.0, 'ctime': 1748143066.9095306, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-25 04:12:06.108154 | orchestrator | 2025-05-25 04:12:06.108164 | orchestrator | TASK [grafana : Check grafana containers] ************************************** 2025-05-25 04:12:06.108174 | orchestrator | Sunday 25 May 2025 04:10:46 +0000 (0:00:35.949) 0:00:49.529 ************ 2025-05-25 04:12:06.108184 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 
'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-05-25 04:12:06.108194 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-05-25 04:12:06.108204 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-05-25 04:12:06.108214 | orchestrator | 2025-05-25 04:12:06.108224 | orchestrator | TASK [grafana : Creating grafana database] ************************************* 2025-05-25 04:12:06.108234 | orchestrator | Sunday 25 May 2025 04:10:47 +0000 (0:00:00.965) 0:00:50.495 ************ 2025-05-25 04:12:06.108244 | orchestrator | changed: [testbed-node-0] 2025-05-25 04:12:06.108254 | orchestrator | 2025-05-25 04:12:06.108263 | orchestrator | TASK [grafana : Creating grafana database user and setting 
permissions] ********
2025-05-25 04:12:06.108273 | orchestrator | Sunday 25 May 2025 04:10:49 +0000 (0:00:02.188) 0:00:52.684 ************
2025-05-25 04:12:06.108282 | orchestrator | changed: [testbed-node-0]
2025-05-25 04:12:06.108292 | orchestrator |
2025-05-25 04:12:06.108306 | orchestrator | TASK [grafana : Flush handlers] ************************************************
2025-05-25 04:12:06.108316 | orchestrator | Sunday 25 May 2025 04:10:52 +0000 (0:00:00.071) 0:00:55.254 ************
2025-05-25 04:12:06.108326 | orchestrator |
2025-05-25 04:12:06.108336 | orchestrator | TASK [grafana : Flush handlers] ************************************************
2025-05-25 04:12:06.108345 | orchestrator | Sunday 25 May 2025 04:10:52 +0000 (0:00:00.096) 0:00:55.326 ************
2025-05-25 04:12:06.108354 | orchestrator |
2025-05-25 04:12:06.108364 | orchestrator | TASK [grafana : Flush handlers] ************************************************
2025-05-25 04:12:06.108373 | orchestrator | Sunday 25 May 2025 04:10:52 +0000 (0:00:00.068) 0:00:55.422 ************
2025-05-25 04:12:06.108383 | orchestrator |
2025-05-25 04:12:06.108392 | orchestrator | RUNNING HANDLER [grafana : Restart first grafana container] ********************
2025-05-25 04:12:06.108402 | orchestrator | Sunday 25 May 2025 04:10:52 +0000 (0:00:00.068) 0:00:55.490 ************
2025-05-25 04:12:06.108411 | orchestrator | skipping: [testbed-node-1]
2025-05-25 04:12:06.108421 | orchestrator | skipping: [testbed-node-2]
2025-05-25 04:12:06.108478 | orchestrator | changed: [testbed-node-0]
2025-05-25 04:12:06.108488 | orchestrator |
2025-05-25 04:12:06.108498 | orchestrator | RUNNING HANDLER [grafana : Waiting for grafana to start on first node] *********
2025-05-25 04:12:06.108507 | orchestrator | Sunday 25 May 2025 04:10:54 +0000 (0:00:02.240) 0:00:57.733 ************
2025-05-25 04:12:06.108531 | orchestrator | skipping: [testbed-node-1]
2025-05-25 04:12:06.108541 | orchestrator | skipping: [testbed-node-2]
2025-05-25 04:12:06.108551 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Waiting for grafana to start on first node (12 retries left).
2025-05-25 04:12:06.108561 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Waiting for grafana to start on first node (11 retries left).
2025-05-25 04:12:06.108571 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Waiting for grafana to start on first node (10 retries left).
2025-05-25 04:12:06.108580 | orchestrator | ok: [testbed-node-0]
2025-05-25 04:12:06.108590 | orchestrator |
2025-05-25 04:12:06.108600 | orchestrator | RUNNING HANDLER [grafana : Restart remaining grafana containers] ***************
2025-05-25 04:12:06.108610 | orchestrator | Sunday 25 May 2025 04:11:32 +0000 (0:00:37.956) 0:01:35.689 ************
2025-05-25 04:12:06.108619 | orchestrator | skipping: [testbed-node-0]
2025-05-25 04:12:06.108629 | orchestrator | changed: [testbed-node-2]
2025-05-25 04:12:06.108638 | orchestrator | changed: [testbed-node-1]
2025-05-25 04:12:06.108648 | orchestrator |
2025-05-25 04:12:06.108657 | orchestrator | TASK [grafana : Wait for grafana application ready] ****************************
2025-05-25 04:12:06.108667 | orchestrator | Sunday 25 May 2025 04:11:59 +0000 (0:00:26.865) 0:02:02.555 ************
2025-05-25 04:12:06.108676 | orchestrator | ok: [testbed-node-0]
2025-05-25 04:12:06.108686 | orchestrator |
2025-05-25 04:12:06.108695 | orchestrator | TASK [grafana : Remove old grafana docker volume] ******************************
2025-05-25 04:12:06.108705 | orchestrator | Sunday 25 May 2025 04:12:01 +0000 (0:00:02.214) 0:02:04.769 ************
2025-05-25 04:12:06.108714 | orchestrator | skipping: [testbed-node-0]
2025-05-25 04:12:06.108724 | orchestrator | skipping: [testbed-node-1]
2025-05-25 04:12:06.108734 | orchestrator | skipping: [testbed-node-2]
2025-05-25 04:12:06.108743 | orchestrator |
2025-05-25 04:12:06.108752 | orchestrator | TASK [grafana : Enable grafana datasources] ************************************
2025-05-25 04:12:06.108762 | orchestrator | Sunday 25 May 2025 04:12:02 +0000 (0:00:00.307) 0:02:05.077 ************
2025-05-25 04:12:06.108772 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'influxdb', 'value': {'enabled': False, 'data': {'isDefault': True, 'database': 'telegraf', 'name': 'telegraf', 'type': 'influxdb', 'url': 'https://api-int.testbed.osism.xyz:8086', 'access': 'proxy', 'basicAuth': False}}})
2025-05-25 04:12:06.108783 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'enabled': True, 'data': {'name': 'opensearch', 'type': 'grafana-opensearch-datasource', 'access': 'proxy', 'url': 'https://api-int.testbed.osism.xyz:9200', 'jsonData': {'flavor': 'OpenSearch', 'database': 'flog-*', 'version': '2.11.1', 'timeField': '@timestamp', 'logLevelField': 'log_level'}}}})
2025-05-25 04:12:06.108793 | orchestrator |
2025-05-25 04:12:06.108803 | orchestrator | TASK [grafana : Disable Getting Started panel] *********************************
2025-05-25 04:12:06.108812 | orchestrator | Sunday 25 May 2025 04:12:04 +0000 (0:00:02.145) 0:02:07.223 ************
2025-05-25 04:12:06.108822 | orchestrator | skipping: [testbed-node-0]
2025-05-25 04:12:06.108831 | orchestrator |
2025-05-25 04:12:06.108841 | orchestrator | PLAY RECAP *********************************************************************
2025-05-25 04:12:06.108850 | orchestrator | testbed-node-0 : ok=21  changed=12  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0
2025-05-25 04:12:06.108861 | orchestrator | testbed-node-1 : ok=14  changed=9  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0
2025-05-25 04:12:06.108907 | orchestrator | testbed-node-2 : ok=14  changed=9  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0
2025-05-25 04:12:06.108924 | orchestrator |
2025-05-25 04:12:06.108943 | orchestrator | TASKS RECAP ********************************************************************
2025-05-25 04:12:06.108952 | orchestrator | Sunday 25 May 2025 04:12:04 +0000 (0:00:00.235) 0:02:07.459 ************
2025-05-25 04:12:06.108962 | orchestrator | ===============================================================================
2025-05-25 04:12:06.108971 | orchestrator | grafana : Waiting for grafana to start on first node ------------------- 37.96s
2025-05-25 04:12:06.108981 | orchestrator | grafana : Copying over custom dashboards ------------------------------- 35.95s
2025-05-25 04:12:06.108996 | orchestrator | grafana : Restart remaining grafana containers ------------------------- 26.87s
2025-05-25 04:12:06.109007 | orchestrator | grafana : Creating grafana database user and setting permissions -------- 2.57s
2025-05-25 04:12:06.109016 | orchestrator | grafana : Restart first grafana container ------------------------------- 2.24s
2025-05-25 04:12:06.109026 | orchestrator | grafana : Wait for grafana application ready ---------------------------- 2.21s
2025-05-25 04:12:06.109035 | orchestrator | grafana : Creating grafana database ------------------------------------- 2.19s
2025-05-25 04:12:06.109045 | orchestrator | grafana : Enable grafana datasources ------------------------------------ 2.15s
2025-05-25 04:12:06.109054 | orchestrator | service-cert-copy : grafana | Copying over extra CA certificates -------- 1.32s
2025-05-25 04:12:06.109064 | orchestrator | grafana : Copying over grafana.ini -------------------------------------- 1.26s
2025-05-25 04:12:06.109073 | orchestrator | grafana : Configuring Prometheus as data source for Grafana ------------- 1.24s
2025-05-25 04:12:06.109082 | orchestrator | grafana : Copying over config.json files -------------------------------- 1.21s
2025-05-25 04:12:06.109092 | orchestrator | grafana : Configuring dashboards provisioning --------------------------- 1.13s
2025-05-25 04:12:06.109101 | orchestrator | grafana : Check grafana
containers -------------------------------------- 0.97s
2025-05-25 04:12:06.109111 | orchestrator | service-cert-copy : grafana | Copying over backend internal TLS key ----- 0.86s
2025-05-25 04:12:06.109120 | orchestrator | grafana : Check if extra configuration file exists ---------------------- 0.85s
2025-05-25 04:12:06.109134 | orchestrator | grafana : Find custom grafana dashboards -------------------------------- 0.71s
2025-05-25 04:12:06.109144 | orchestrator | grafana : Ensuring config directories exist ----------------------------- 0.71s
2025-05-25 04:12:06.109153 | orchestrator | grafana : include_tasks ------------------------------------------------- 0.69s
2025-05-25 04:12:06.109163 | orchestrator | grafana : Find templated grafana dashboards ----------------------------- 0.68s
2025-05-25 04:12:09.148883 | orchestrator | 2025-05-25 04:12:09 | INFO  | Task 8d971383-7af0-4460-aa31-cdabb83173b7 is in state STARTED
2025-05-25 04:12:09.148989 | orchestrator | 2025-05-25 04:12:09 | INFO  | Wait 1 second(s) until the next check
[identical state checks repeated every ~3 s from 04:12:12 through 04:13:07; condensed]
2025-05-25 04:13:10.176404 | orchestrator | 2025-05-25 04:13:10 | INFO  | Task 8d971383-7af0-4460-aa31-cdabb83173b7 is in state STARTED
2025-05-25 04:13:10.176622 | orchestrator | 2025-05-25 04:13:10 | INFO  | Wait 1 second(s) until
the next check 2025-05-25 04:13:13.219852 | orchestrator | 2025-05-25 04:13:13 | INFO  | Task 8d971383-7af0-4460-aa31-cdabb83173b7 is in state STARTED 2025-05-25 04:13:13.219955 | orchestrator | 2025-05-25 04:13:13 | INFO  | Wait 1 second(s) until the next check 2025-05-25 04:13:16.261677 | orchestrator | 2025-05-25 04:13:16 | INFO  | Task 8d971383-7af0-4460-aa31-cdabb83173b7 is in state STARTED 2025-05-25 04:13:16.261770 | orchestrator | 2025-05-25 04:13:16 | INFO  | Wait 1 second(s) until the next check 2025-05-25 04:13:19.303914 | orchestrator | 2025-05-25 04:13:19 | INFO  | Task 8d971383-7af0-4460-aa31-cdabb83173b7 is in state STARTED 2025-05-25 04:13:19.304022 | orchestrator | 2025-05-25 04:13:19 | INFO  | Wait 1 second(s) until the next check 2025-05-25 04:13:22.357133 | orchestrator | 2025-05-25 04:13:22 | INFO  | Task 8d971383-7af0-4460-aa31-cdabb83173b7 is in state STARTED 2025-05-25 04:13:22.357257 | orchestrator | 2025-05-25 04:13:22 | INFO  | Wait 1 second(s) until the next check 2025-05-25 04:13:25.408418 | orchestrator | 2025-05-25 04:13:25 | INFO  | Task 8d971383-7af0-4460-aa31-cdabb83173b7 is in state STARTED 2025-05-25 04:13:25.408584 | orchestrator | 2025-05-25 04:13:25 | INFO  | Wait 1 second(s) until the next check 2025-05-25 04:13:28.458969 | orchestrator | 2025-05-25 04:13:28 | INFO  | Task 8d971383-7af0-4460-aa31-cdabb83173b7 is in state STARTED 2025-05-25 04:13:28.459075 | orchestrator | 2025-05-25 04:13:28 | INFO  | Wait 1 second(s) until the next check 2025-05-25 04:13:31.507249 | orchestrator | 2025-05-25 04:13:31 | INFO  | Task 8d971383-7af0-4460-aa31-cdabb83173b7 is in state STARTED 2025-05-25 04:13:31.507355 | orchestrator | 2025-05-25 04:13:31 | INFO  | Wait 1 second(s) until the next check 2025-05-25 04:13:34.553248 | orchestrator | 2025-05-25 04:13:34 | INFO  | Task 8d971383-7af0-4460-aa31-cdabb83173b7 is in state STARTED 2025-05-25 04:13:34.553356 | orchestrator | 2025-05-25 04:13:34 | INFO  | Wait 1 second(s) until the next check 
2025-05-25 04:13:37.592744 | orchestrator | 2025-05-25 04:13:37 | INFO  | Task 8d971383-7af0-4460-aa31-cdabb83173b7 is in state STARTED 2025-05-25 04:13:37.592863 | orchestrator | 2025-05-25 04:13:37 | INFO  | Wait 1 second(s) until the next check 2025-05-25 04:13:40.641066 | orchestrator | 2025-05-25 04:13:40 | INFO  | Task 8d971383-7af0-4460-aa31-cdabb83173b7 is in state STARTED 2025-05-25 04:13:40.641159 | orchestrator | 2025-05-25 04:13:40 | INFO  | Wait 1 second(s) until the next check 2025-05-25 04:13:43.691513 | orchestrator | 2025-05-25 04:13:43 | INFO  | Task 8d971383-7af0-4460-aa31-cdabb83173b7 is in state STARTED 2025-05-25 04:13:43.691652 | orchestrator | 2025-05-25 04:13:43 | INFO  | Wait 1 second(s) until the next check 2025-05-25 04:13:46.737346 | orchestrator | 2025-05-25 04:13:46 | INFO  | Task 8d971383-7af0-4460-aa31-cdabb83173b7 is in state STARTED 2025-05-25 04:13:46.737427 | orchestrator | 2025-05-25 04:13:46 | INFO  | Wait 1 second(s) until the next check 2025-05-25 04:13:49.788416 | orchestrator | 2025-05-25 04:13:49 | INFO  | Task 8d971383-7af0-4460-aa31-cdabb83173b7 is in state STARTED 2025-05-25 04:13:49.788648 | orchestrator | 2025-05-25 04:13:49 | INFO  | Wait 1 second(s) until the next check 2025-05-25 04:13:52.840996 | orchestrator | 2025-05-25 04:13:52 | INFO  | Task 8d971383-7af0-4460-aa31-cdabb83173b7 is in state STARTED 2025-05-25 04:13:52.841137 | orchestrator | 2025-05-25 04:13:52 | INFO  | Wait 1 second(s) until the next check 2025-05-25 04:13:55.886684 | orchestrator | 2025-05-25 04:13:55 | INFO  | Task 8d971383-7af0-4460-aa31-cdabb83173b7 is in state STARTED 2025-05-25 04:13:55.886814 | orchestrator | 2025-05-25 04:13:55 | INFO  | Wait 1 second(s) until the next check 2025-05-25 04:13:58.930302 | orchestrator | 2025-05-25 04:13:58 | INFO  | Task 8d971383-7af0-4460-aa31-cdabb83173b7 is in state STARTED 2025-05-25 04:13:58.930430 | orchestrator | 2025-05-25 04:13:58 | INFO  | Wait 1 second(s) until the next check 2025-05-25 
04:14:01.980312 | orchestrator | 2025-05-25 04:14:01 | INFO  | Task 8d971383-7af0-4460-aa31-cdabb83173b7 is in state STARTED 2025-05-25 04:14:01.980410 | orchestrator | 2025-05-25 04:14:01 | INFO  | Wait 1 second(s) until the next check 2025-05-25 04:14:05.027005 | orchestrator | 2025-05-25 04:14:05 | INFO  | Task 8d971383-7af0-4460-aa31-cdabb83173b7 is in state STARTED 2025-05-25 04:14:05.027126 | orchestrator | 2025-05-25 04:14:05 | INFO  | Wait 1 second(s) until the next check 2025-05-25 04:14:08.090332 | orchestrator | 2025-05-25 04:14:08 | INFO  | Task 8d971383-7af0-4460-aa31-cdabb83173b7 is in state STARTED 2025-05-25 04:14:08.090486 | orchestrator | 2025-05-25 04:14:08 | INFO  | Wait 1 second(s) until the next check 2025-05-25 04:14:11.135612 | orchestrator | 2025-05-25 04:14:11 | INFO  | Task 8d971383-7af0-4460-aa31-cdabb83173b7 is in state STARTED 2025-05-25 04:14:11.135716 | orchestrator | 2025-05-25 04:14:11 | INFO  | Wait 1 second(s) until the next check 2025-05-25 04:14:14.180191 | orchestrator | 2025-05-25 04:14:14 | INFO  | Task 8d971383-7af0-4460-aa31-cdabb83173b7 is in state STARTED 2025-05-25 04:14:14.180294 | orchestrator | 2025-05-25 04:14:14 | INFO  | Wait 1 second(s) until the next check 2025-05-25 04:14:17.226351 | orchestrator | 2025-05-25 04:14:17 | INFO  | Task 8d971383-7af0-4460-aa31-cdabb83173b7 is in state STARTED 2025-05-25 04:14:17.226515 | orchestrator | 2025-05-25 04:14:17 | INFO  | Wait 1 second(s) until the next check 2025-05-25 04:14:20.275247 | orchestrator | 2025-05-25 04:14:20 | INFO  | Task 8d971383-7af0-4460-aa31-cdabb83173b7 is in state STARTED 2025-05-25 04:14:20.275337 | orchestrator | 2025-05-25 04:14:20 | INFO  | Wait 1 second(s) until the next check 2025-05-25 04:14:23.319393 | orchestrator | 2025-05-25 04:14:23 | INFO  | Task 8d971383-7af0-4460-aa31-cdabb83173b7 is in state STARTED 2025-05-25 04:14:23.319571 | orchestrator | 2025-05-25 04:14:23 | INFO  | Wait 1 second(s) until the next check 2025-05-25 04:14:26.363920 
| orchestrator | 2025-05-25 04:14:26 | INFO  | Task 8d971383-7af0-4460-aa31-cdabb83173b7 is in state STARTED 2025-05-25 04:14:26.364029 | orchestrator | 2025-05-25 04:14:26 | INFO  | Wait 1 second(s) until the next check 2025-05-25 04:14:29.410986 | orchestrator | 2025-05-25 04:14:29 | INFO  | Task 8d971383-7af0-4460-aa31-cdabb83173b7 is in state STARTED 2025-05-25 04:14:29.411077 | orchestrator | 2025-05-25 04:14:29 | INFO  | Wait 1 second(s) until the next check 2025-05-25 04:14:32.468048 | orchestrator | 2025-05-25 04:14:32 | INFO  | Task 8d971383-7af0-4460-aa31-cdabb83173b7 is in state STARTED 2025-05-25 04:14:32.468097 | orchestrator | 2025-05-25 04:14:32 | INFO  | Wait 1 second(s) until the next check 2025-05-25 04:14:35.523532 | orchestrator | 2025-05-25 04:14:35 | INFO  | Task 8d971383-7af0-4460-aa31-cdabb83173b7 is in state SUCCESS 2025-05-25 04:14:35.525788 | orchestrator | 2025-05-25 04:14:35.526906 | orchestrator | 2025-05-25 04:14:35.526941 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-05-25 04:14:35.526961 | orchestrator | 2025-05-25 04:14:35.526973 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-05-25 04:14:35.526985 | orchestrator | Sunday 25 May 2025 04:10:07 +0000 (0:00:00.259) 0:00:00.259 ************ 2025-05-25 04:14:35.527021 | orchestrator | ok: [testbed-node-0] 2025-05-25 04:14:35.527035 | orchestrator | ok: [testbed-node-1] 2025-05-25 04:14:35.527046 | orchestrator | ok: [testbed-node-2] 2025-05-25 04:14:35.527057 | orchestrator | 2025-05-25 04:14:35.527096 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-05-25 04:14:35.527108 | orchestrator | Sunday 25 May 2025 04:10:08 +0000 (0:00:00.285) 0:00:00.544 ************ 2025-05-25 04:14:35.527118 | orchestrator | ok: [testbed-node-0] => (item=enable_octavia_True) 2025-05-25 04:14:35.527130 | orchestrator | ok: [testbed-node-1] => 
(item=enable_octavia_True)
2025-05-25 04:14:35.527142 | orchestrator | ok: [testbed-node-2] => (item=enable_octavia_True)
2025-05-25 04:14:35.527153 | orchestrator |
2025-05-25 04:14:35.527164 | orchestrator | PLAY [Apply role octavia] ******************************************************
2025-05-25 04:14:35.527174 | orchestrator |
2025-05-25 04:14:35.527185 | orchestrator | TASK [octavia : include_tasks] *************************************************
2025-05-25 04:14:35.527196 | orchestrator | Sunday 25 May 2025 04:10:08 +0000 (0:00:00.409) 0:00:00.954 ************
2025-05-25 04:14:35.527208 | orchestrator | included: /ansible/roles/octavia/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-05-25 04:14:35.527220 | orchestrator |
2025-05-25 04:14:35.527231 | orchestrator | TASK [service-ks-register : octavia | Creating services] ***********************
2025-05-25 04:14:35.527242 | orchestrator | Sunday 25 May 2025 04:10:09 +0000 (0:00:00.523) 0:00:01.477 ************
2025-05-25 04:14:35.527253 | orchestrator | changed: [testbed-node-0] => (item=octavia (load-balancer))
2025-05-25 04:14:35.527263 | orchestrator |
2025-05-25 04:14:35.527274 | orchestrator | TASK [service-ks-register : octavia | Creating endpoints] **********************
2025-05-25 04:14:35.527285 | orchestrator | Sunday 25 May 2025 04:10:12 +0000 (0:00:03.123) 0:00:04.600 ************
2025-05-25 04:14:35.527310 | orchestrator | changed: [testbed-node-0] => (item=octavia -> https://api-int.testbed.osism.xyz:9876 -> internal)
2025-05-25 04:14:35.527322 | orchestrator | changed: [testbed-node-0] => (item=octavia -> https://api.testbed.osism.xyz:9876 -> public)
2025-05-25 04:14:35.527332 | orchestrator |
2025-05-25 04:14:35.527343 | orchestrator | TASK [service-ks-register : octavia | Creating projects] ***********************
2025-05-25 04:14:35.527354 | orchestrator | Sunday 25 May 2025 04:10:18 +0000 (0:00:06.149) 0:00:10.750 ************
2025-05-25 04:14:35.527365 | orchestrator | ok: [testbed-node-0] => (item=service)
2025-05-25 04:14:35.527376 | orchestrator |
2025-05-25 04:14:35.527387 | orchestrator | TASK [service-ks-register : octavia | Creating users] **************************
2025-05-25 04:14:35.527397 | orchestrator | Sunday 25 May 2025 04:10:21 +0000 (0:00:03.104) 0:00:13.854 ************
2025-05-25 04:14:35.527432 | orchestrator | [WARNING]: Module did not set no_log for update_password
2025-05-25 04:14:35.527444 | orchestrator | changed: [testbed-node-0] => (item=octavia -> service)
2025-05-25 04:14:35.527455 | orchestrator | changed: [testbed-node-0] => (item=octavia -> service)
2025-05-25 04:14:35.527466 | orchestrator |
2025-05-25 04:14:35.527477 | orchestrator | TASK [service-ks-register : octavia | Creating roles] **************************
2025-05-25 04:14:35.527490 | orchestrator | Sunday 25 May 2025 04:10:29 +0000 (0:00:08.087) 0:00:21.942 ************
2025-05-25 04:14:35.527503 | orchestrator | ok: [testbed-node-0] => (item=admin)
2025-05-25 04:14:35.527514 | orchestrator |
2025-05-25 04:14:35.527527 | orchestrator | TASK [service-ks-register : octavia | Granting user roles] *********************
2025-05-25 04:14:35.527539 | orchestrator | Sunday 25 May 2025 04:10:32 +0000 (0:00:03.178) 0:00:25.120 ************
2025-05-25 04:14:35.527551 | orchestrator | changed: [testbed-node-0] => (item=octavia -> service -> admin)
2025-05-25 04:14:35.527563 | orchestrator | ok: [testbed-node-0] => (item=octavia -> service -> admin)
2025-05-25 04:14:35.527576 | orchestrator |
2025-05-25 04:14:35.527590 | orchestrator | TASK [octavia : Adding octavia related roles] **********************************
2025-05-25 04:14:35.527602 | orchestrator | Sunday 25 May 2025 04:10:40 +0000 (0:00:07.319) 0:00:32.440 ************
2025-05-25 04:14:35.527614 | orchestrator | changed: [testbed-node-0] => (item=load-balancer_observer)
2025-05-25 04:14:35.527636 | orchestrator | changed: [testbed-node-0] => (item=load-balancer_global_observer)
2025-05-25 04:14:35.527649 | orchestrator | changed: [testbed-node-0] => (item=load-balancer_member)
2025-05-25 04:14:35.527661 | orchestrator | changed: [testbed-node-0] => (item=load-balancer_admin)
2025-05-25 04:14:35.527674 | orchestrator | changed: [testbed-node-0] => (item=load-balancer_quota_admin)
2025-05-25 04:14:35.527686 | orchestrator |
2025-05-25 04:14:35.527699 | orchestrator | TASK [octavia : include_tasks] *************************************************
2025-05-25 04:14:35.527711 | orchestrator | Sunday 25 May 2025 04:10:54 +0000 (0:00:14.903) 0:00:47.343 ************
2025-05-25 04:14:35.527721 | orchestrator | included: /ansible/roles/octavia/tasks/prepare.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-05-25 04:14:35.527732 | orchestrator |
2025-05-25 04:14:35.527743 | orchestrator | TASK [octavia : Create amphora flavor] *****************************************
2025-05-25 04:14:35.527753 | orchestrator | Sunday 25 May 2025 04:10:56 +0000 (0:00:01.343) 0:00:48.687 ************
2025-05-25 04:14:35.527764 | orchestrator | changed: [testbed-node-0]
2025-05-25 04:14:35.527775 | orchestrator |
2025-05-25 04:14:35.527786 | orchestrator | TASK [octavia : Create nova keypair for amphora] *******************************
2025-05-25 04:14:35.527797 | orchestrator | Sunday 25 May 2025 04:11:01 +0000 (0:00:05.066) 0:00:53.753 ************
2025-05-25 04:14:35.527808 | orchestrator | changed: [testbed-node-0]
2025-05-25 04:14:35.527818 | orchestrator |
2025-05-25 04:14:35.527829 | orchestrator | TASK [octavia : Get service project id] ****************************************
2025-05-25 04:14:35.527900 | orchestrator | Sunday 25 May 2025 04:11:05 +0000 (0:00:04.016) 0:00:57.770 ************
2025-05-25 04:14:35.527914 | orchestrator | ok: [testbed-node-0]
2025-05-25 04:14:35.527925 | orchestrator |
2025-05-25 04:14:35.527937 | orchestrator | TASK [octavia : Create security groups for octavia] ****************************
2025-05-25 04:14:35.527948 | orchestrator | Sunday 25 May 2025 04:11:08 +0000 (0:00:03.004) 0:01:00.774 ************
2025-05-25 04:14:35.527959 | orchestrator | changed: [testbed-node-0] => (item=lb-mgmt-sec-grp)
2025-05-25 04:14:35.527970 | orchestrator | changed: [testbed-node-0] => (item=lb-health-mgr-sec-grp)
2025-05-25 04:14:35.527981 | orchestrator |
2025-05-25 04:14:35.527992 | orchestrator | TASK [octavia : Add rules for security groups] *********************************
2025-05-25 04:14:35.528003 | orchestrator | Sunday 25 May 2025 04:11:19 +0000 (0:00:10.729) 0:01:11.504 ************
2025-05-25 04:14:35.528013 | orchestrator | changed: [testbed-node-0] => (item=[{'name': 'lb-mgmt-sec-grp', 'enabled': True}, {'protocol': 'icmp'}])
2025-05-25 04:14:35.528025 | orchestrator | changed: [testbed-node-0] => (item=[{'name': 'lb-mgmt-sec-grp', 'enabled': True}, {'protocol': 'tcp', 'src_port': 22, 'dst_port': 22}])
2025-05-25 04:14:35.528038 | orchestrator | changed: [testbed-node-0] => (item=[{'name': 'lb-mgmt-sec-grp', 'enabled': True}, {'protocol': 'tcp', 'src_port': '9443', 'dst_port': '9443'}])
2025-05-25 04:14:35.528050 | orchestrator | changed: [testbed-node-0] => (item=[{'name': 'lb-health-mgr-sec-grp', 'enabled': True}, {'protocol': 'udp', 'src_port': '5555', 'dst_port': '5555'}])
2025-05-25 04:14:35.528061 | orchestrator |
2025-05-25 04:14:35.528072 | orchestrator | TASK [octavia : Create loadbalancer management network] ************************
2025-05-25 04:14:35.528083 | orchestrator | Sunday 25 May 2025 04:11:33 +0000 (0:00:14.882) 0:01:26.387 ************
2025-05-25 04:14:35.528094 | orchestrator | changed: [testbed-node-0]
2025-05-25 04:14:35.528105 | orchestrator |
2025-05-25 04:14:35.528115 | orchestrator | TASK [octavia : Create loadbalancer management subnet] *************************
2025-05-25 04:14:35.528126 | orchestrator | Sunday 25 May 2025 04:11:39 +0000 (0:00:05.117) 0:01:31.505 ************
2025-05-25 04:14:35.528137 | orchestrator | changed: [testbed-node-0]
2025-05-25 04:14:35.528148 | orchestrator |
2025-05-25 04:14:35.528165 | orchestrator | TASK [octavia : Create loadbalancer management router for IPv6] ****************
2025-05-25 04:14:35.528176 | orchestrator | Sunday 25 May 2025 04:11:44 +0000 (0:00:05.074) 0:01:36.580 ************
2025-05-25 04:14:35.528187 | orchestrator | skipping: [testbed-node-0]
2025-05-25 04:14:35.528197 | orchestrator |
2025-05-25 04:14:35.528216 | orchestrator | TASK [octavia : Update loadbalancer management subnet] *************************
2025-05-25 04:14:35.528226 | orchestrator | Sunday 25 May 2025 04:11:44 +0000 (0:00:00.212) 0:01:36.793 ************
2025-05-25 04:14:35.528237 | orchestrator | changed: [testbed-node-0]
2025-05-25 04:14:35.528248 | orchestrator |
2025-05-25 04:14:35.528259 | orchestrator | TASK [octavia : include_tasks] *************************************************
2025-05-25 04:14:35.528270 | orchestrator | Sunday 25 May 2025 04:11:48 +0000 (0:00:04.109) 0:01:40.902 ************
2025-05-25 04:14:35.528281 | orchestrator | included: /ansible/roles/octavia/tasks/hm-interface.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-05-25 04:14:35.528292 | orchestrator |
2025-05-25 04:14:35.528302 | orchestrator | TASK [octavia : Create ports for Octavia health-manager nodes] *****************
2025-05-25 04:14:35.528313 | orchestrator | Sunday 25 May 2025 04:11:49 +0000 (0:00:01.175) 0:01:42.077 ************
2025-05-25 04:14:35.528324 | orchestrator | changed: [testbed-node-1]
2025-05-25 04:14:35.528335 | orchestrator | changed: [testbed-node-0]
2025-05-25 04:14:35.528345 | orchestrator | changed: [testbed-node-2]
2025-05-25 04:14:35.528356 | orchestrator |
2025-05-25 04:14:35.528367 | orchestrator | TASK [octavia : Update Octavia health manager port host_id] ********************
2025-05-25 04:14:35.528378 | orchestrator | Sunday 25 May 2025 04:11:54 +0000 (0:00:04.927) 0:01:47.004 ************
2025-05-25 04:14:35.528389 | orchestrator | changed: [testbed-node-0]
2025-05-25 04:14:35.528400 | orchestrator | changed: [testbed-node-2]
2025-05-25 04:14:35.528442 | orchestrator | changed: [testbed-node-1]
2025-05-25 04:14:35.528453 | orchestrator |
2025-05-25 04:14:35.528464 | orchestrator | TASK [octavia : Add Octavia port to openvswitch br-int] ************************
2025-05-25 04:14:35.528475 | orchestrator | Sunday 25 May 2025 04:11:59 +0000 (0:00:04.415) 0:01:51.420 ************
2025-05-25 04:14:35.528485 | orchestrator | changed: [testbed-node-0]
2025-05-25 04:14:35.528496 | orchestrator | changed: [testbed-node-1]
2025-05-25 04:14:35.528507 | orchestrator | changed: [testbed-node-2]
2025-05-25 04:14:35.528518 | orchestrator |
2025-05-25 04:14:35.528529 | orchestrator | TASK [octavia : Install isc-dhcp-client package] *******************************
2025-05-25 04:14:35.528540 | orchestrator | Sunday 25 May 2025 04:11:59 +0000 (0:00:00.703) 0:01:52.124 ************
2025-05-25 04:14:35.528551 | orchestrator | ok: [testbed-node-2]
2025-05-25 04:14:35.528562 | orchestrator | ok: [testbed-node-1]
2025-05-25 04:14:35.528572 | orchestrator | ok: [testbed-node-0]
2025-05-25 04:14:35.528583 | orchestrator |
2025-05-25 04:14:35.528594 | orchestrator | TASK [octavia : Create octavia dhclient conf] **********************************
2025-05-25 04:14:35.528605 | orchestrator | Sunday 25 May 2025 04:12:01 +0000 (0:00:01.976) 0:01:54.100 ************
2025-05-25 04:14:35.528615 | orchestrator | changed: [testbed-node-2]
2025-05-25 04:14:35.528626 | orchestrator | changed: [testbed-node-1]
2025-05-25 04:14:35.528637 | orchestrator | changed: [testbed-node-0]
2025-05-25 04:14:35.528648 | orchestrator |
2025-05-25 04:14:35.528659 | orchestrator | TASK [octavia : Create octavia-interface service] ******************************
2025-05-25 04:14:35.528669 | orchestrator | Sunday 25 May 2025 04:12:02 +0000 (0:00:01.251) 0:01:55.352 ************
2025-05-25 04:14:35.528680 | orchestrator | changed: [testbed-node-0]
2025-05-25 04:14:35.528691 | orchestrator | changed: [testbed-node-1]
2025-05-25 04:14:35.528702 | orchestrator | changed: [testbed-node-2]
2025-05-25 04:14:35.528712 | orchestrator |
2025-05-25 04:14:35.528723 | orchestrator | TASK [octavia : Restart octavia-interface.service if required] *****************
2025-05-25 04:14:35.528734 | orchestrator | Sunday 25 May 2025 04:12:04 +0000 (0:00:01.184) 0:01:56.536 ************
2025-05-25 04:14:35.528745 | orchestrator | changed: [testbed-node-0]
2025-05-25 04:14:35.528755 | orchestrator | changed: [testbed-node-2]
2025-05-25 04:14:35.528767 | orchestrator | changed: [testbed-node-1]
2025-05-25 04:14:35.528777 | orchestrator |
2025-05-25 04:14:35.528827 | orchestrator | TASK [octavia : Enable and start octavia-interface.service] ********************
2025-05-25 04:14:35.528841 | orchestrator | Sunday 25 May 2025 04:12:06 +0000 (0:00:01.946) 0:01:58.483 ************
2025-05-25 04:14:35.528852 | orchestrator | changed: [testbed-node-0]
2025-05-25 04:14:35.528870 | orchestrator | changed: [testbed-node-1]
2025-05-25 04:14:35.528881 | orchestrator | changed: [testbed-node-2]
2025-05-25 04:14:35.528892 | orchestrator |
2025-05-25 04:14:35.528903 | orchestrator | TASK [octavia : Wait for interface ohm0 ip appear] *****************************
2025-05-25 04:14:35.528913 | orchestrator | Sunday 25 May 2025 04:12:07 +0000 (0:00:01.685) 0:02:00.168 ************
2025-05-25 04:14:35.528924 | orchestrator | ok: [testbed-node-0]
2025-05-25 04:14:35.528935 | orchestrator | ok: [testbed-node-1]
2025-05-25 04:14:35.528946 | orchestrator | ok: [testbed-node-2]
2025-05-25 04:14:35.528956 | orchestrator |
2025-05-25 04:14:35.528967 | orchestrator | TASK [octavia : Gather facts] **************************************************
2025-05-25 04:14:35.528978 | orchestrator | Sunday 25 May 2025 04:12:08 +0000 (0:00:00.610) 0:02:00.779 ************
2025-05-25 04:14:35.528989 | orchestrator | ok: [testbed-node-2]
2025-05-25 04:14:35.528999 | orchestrator | ok: [testbed-node-0]
2025-05-25 04:14:35.529010 | orchestrator | ok: [testbed-node-1]
2025-05-25 04:14:35.529021 | orchestrator |
2025-05-25 04:14:35.529031 | orchestrator | TASK [octavia : include_tasks] *************************************************
2025-05-25 04:14:35.529042 | orchestrator | Sunday 25 May 2025 04:12:11 +0000 (0:00:02.716) 0:02:03.496 ************
2025-05-25 04:14:35.529053 | orchestrator | included: /ansible/roles/octavia/tasks/get_resources_info.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-05-25 04:14:35.529063 | orchestrator |
2025-05-25 04:14:35.529074 | orchestrator | TASK [octavia : Get amphora flavor info] ***************************************
2025-05-25 04:14:35.529085 | orchestrator | Sunday 25 May 2025 04:12:11 +0000 (0:00:00.696) 0:02:04.193 ************
2025-05-25 04:14:35.529095 | orchestrator | ok: [testbed-node-0]
2025-05-25 04:14:35.529106 | orchestrator |
2025-05-25 04:14:35.529117 | orchestrator | TASK [octavia : Get service project id] ****************************************
2025-05-25 04:14:35.529128 | orchestrator | Sunday 25 May 2025 04:12:15 +0000 (0:00:03.979) 0:02:08.172 ************
2025-05-25 04:14:35.529138 | orchestrator | ok: [testbed-node-0]
2025-05-25 04:14:35.529149 | orchestrator |
2025-05-25 04:14:35.529165 | orchestrator | TASK [octavia : Get security groups for octavia] *******************************
2025-05-25 04:14:35.529176 | orchestrator | Sunday 25 May 2025 04:12:18 +0000 (0:00:03.010) 0:02:11.182 ************
2025-05-25 04:14:35.529187 | orchestrator | ok: [testbed-node-0] => (item=lb-mgmt-sec-grp)
2025-05-25 04:14:35.529198 | orchestrator | ok: [testbed-node-0] => (item=lb-health-mgr-sec-grp)
2025-05-25 04:14:35.529209 | orchestrator |
2025-05-25 04:14:35.529220 | orchestrator | TASK [octavia : Get loadbalancer management network] ***************************
2025-05-25 04:14:35.529231 | orchestrator | Sunday 25 May 2025 
04:12:25 +0000 (0:00:06.941) 0:02:18.124 ************
2025-05-25 04:14:35.529241 | orchestrator | ok: [testbed-node-0]
2025-05-25 04:14:35.529252 | orchestrator |
2025-05-25 04:14:35.529263 | orchestrator | TASK [octavia : Set octavia resources facts] ***********************************
2025-05-25 04:14:35.529274 | orchestrator | Sunday 25 May 2025 04:12:29 +0000 (0:00:03.327) 0:02:21.452 ************
2025-05-25 04:14:35.529285 | orchestrator | ok: [testbed-node-0]
2025-05-25 04:14:35.529296 | orchestrator | ok: [testbed-node-1]
2025-05-25 04:14:35.529306 | orchestrator | ok: [testbed-node-2]
2025-05-25 04:14:35.529317 | orchestrator |
2025-05-25 04:14:35.529328 | orchestrator | TASK [octavia : Ensuring config directories exist] *****************************
2025-05-25 04:14:35.529339 | orchestrator | Sunday 25 May 2025 04:12:29 +0000 (0:00:00.306) 0:02:21.758 ************
2025-05-25 04:14:35.529353 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})
2025-05-25 04:14:35.529426 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 
'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2025-05-25 04:14:35.529441 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2025-05-25 04:14:35.529459 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 
'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2025-05-25 04:14:35.529472 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2025-05-25 04:14:35.529484 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2025-05-25 04:14:35.529496 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
octavia-health-manager 3306'], 'timeout': '30'}}}) 2025-05-25 04:14:35.529515 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2025-05-25 04:14:35.529559 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2025-05-25 04:14:35.529574 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 
'timeout': '30'}}}) 2025-05-25 04:14:35.529590 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2025-05-25 04:14:35.529609 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2025-05-25 04:14:35.529731 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2025-05-25 04:14:35.529774 | orchestrator | changed: [testbed-node-0] => 
(item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2025-05-25 04:14:35.529795 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2025-05-25 04:14:35.529816 | orchestrator | 2025-05-25 04:14:35.529834 | orchestrator | TASK [octavia : Check if policies shall be overwritten] ************************ 2025-05-25 04:14:35.529853 | orchestrator | Sunday 25 May 2025 04:12:31 +0000 (0:00:02.519) 0:02:24.278 ************ 2025-05-25 04:14:35.529873 | orchestrator | skipping: [testbed-node-0] 2025-05-25 04:14:35.529892 | orchestrator | 2025-05-25 04:14:35.529960 | orchestrator | TASK [octavia : Set octavia policy file] *************************************** 2025-05-25 04:14:35.529974 | orchestrator | Sunday 25 May 2025 04:12:32 +0000 (0:00:00.330) 0:02:24.608 ************ 2025-05-25 04:14:35.529985 | orchestrator | skipping: [testbed-node-0] 2025-05-25 04:14:35.529996 | orchestrator | skipping: [testbed-node-1] 2025-05-25 04:14:35.530007 | orchestrator | skipping: 
[testbed-node-2] 2025-05-25 04:14:35.530052 | orchestrator | 2025-05-25 04:14:35.530066 | orchestrator | TASK [octavia : Copying over existing policy file] ***************************** 2025-05-25 04:14:35.530077 | orchestrator | Sunday 25 May 2025 04:12:32 +0000 (0:00:00.304) 0:02:24.913 ************ 2025-05-25 04:14:35.530089 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2025-05-25 04:14:35.530109 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2025-05-25 04:14:35.530121 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 
'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2025-05-25 04:14:35.530143 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2025-05-25 04:14:35.530155 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2025-05-25 04:14:35.530167 | orchestrator | skipping: [testbed-node-0] 2025-05-25 04:14:35.530214 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': 
True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2025-05-25 04:14:35.530228 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2025-05-25 04:14:35.530251 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2025-05-25 
04:14:35.530263 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2025-05-25 04:14:35.530286 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2025-05-25 04:14:35.530298 | orchestrator | skipping: [testbed-node-1] 2025-05-25 04:14:35.530310 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': 
{'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2025-05-25 04:14:35.530356 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2025-05-25 04:14:35.530369 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2025-05-25 04:14:35.530380 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2025-05-25 04:14:35.530397 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2025-05-25 04:14:35.530445 | orchestrator | skipping: [testbed-node-2] 2025-05-25 04:14:35.530458 | orchestrator | 2025-05-25 04:14:35.530470 | orchestrator | TASK [octavia : include_tasks] ************************************************* 2025-05-25 04:14:35.530481 | orchestrator | Sunday 25 May 2025 04:12:33 +0000 (0:00:00.655) 0:02:25.568 ************ 2025-05-25 04:14:35.530492 | orchestrator | included: /ansible/roles/octavia/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-25 04:14:35.530503 | orchestrator | 2025-05-25 04:14:35.530514 | orchestrator | TASK [service-cert-copy : octavia | Copying over extra CA certificates] ******** 2025-05-25 04:14:35.530524 | orchestrator | Sunday 25 May 2025 04:12:33 +0000 (0:00:00.543) 0:02:26.112 ************ 2025-05-25 04:14:35.530536 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 
'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2025-05-25 04:14:35.530582 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2025-05-25 04:14:35.530652 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2025-05-25 04:14:35.530674 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2025-05-25 04:14:35.530694 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2025-05-25 04:14:35.530706 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2025-05-25 04:14:35.530717 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2025-05-25 04:14:35.530729 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2025-05-25 04:14:35.530747 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2025-05-25 04:14:35.530759 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2025-05-25 04:14:35.530782 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2025-05-25 04:14:35.530802 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 
3306'], 'timeout': '30'}}}) 2025-05-25 04:14:35.530821 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2025-05-25 04:14:35.530839 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2025-05-25 04:14:35.530873 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2025-05-25 04:14:35.530892 | orchestrator | 2025-05-25 04:14:35.530910 | orchestrator | TASK [service-cert-copy : octavia | Copying over 
backend internal TLS certificate] *** 2025-05-25 04:14:35.530928 | orchestrator | Sunday 25 May 2025 04:12:38 +0000 (0:00:04.979) 0:02:31.091 ************ 2025-05-25 04:14:35.530948 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2025-05-25 04:14:35.530985 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2025-05-25 04:14:35.531006 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': 
['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2025-05-25 04:14:35.531025 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2025-05-25 04:14:35.531046 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2025-05-25 04:14:35.531065 | orchestrator | skipping: [testbed-node-0] 2025-05-25 04:14:35.531096 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': 
['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2025-05-25 04:14:35.531109 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2025-05-25 04:14:35.531128 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2025-05-25 04:14:35.531145 | orchestrator | skipping: [testbed-node-1] => (item={'key': 
'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2025-05-25 04:14:35.531157 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2025-05-25 04:14:35.531168 | orchestrator | skipping: [testbed-node-1] 2025-05-25 04:14:35.531180 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': 
'9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2025-05-25 04:14:35.531197 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2025-05-25 04:14:35.531209 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2025-05-25 04:14:35.531227 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2025-05-25 04:14:35.531243 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2025-05-25 04:14:35.531255 | orchestrator | skipping: [testbed-node-2] 2025-05-25 04:14:35.531266 | orchestrator | 2025-05-25 04:14:35.531277 | orchestrator | TASK [service-cert-copy : octavia | Copying over backend internal TLS key] ***** 2025-05-25 04:14:35.531288 | orchestrator | Sunday 25 May 2025 04:12:39 +0000 (0:00:00.656) 0:02:31.748 ************ 2025-05-25 04:14:35.531299 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 
'no'}}}})  2025-05-25 04:14:35.531311 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2025-05-25 04:14:35.531322 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2025-05-25 04:14:35.531487 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2025-05-25 04:14:35.531537 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 
'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2025-05-25 04:14:35.531559 | orchestrator | skipping: [testbed-node-0] 2025-05-25 04:14:35.531590 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2025-05-25 04:14:35.531604 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 
'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2025-05-25 04:14:35.531615 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2025-05-25 04:14:35.531627 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2025-05-25 04:14:35.531647 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2025-05-25 
04:14:35.531668 | orchestrator | skipping: [testbed-node-1] 2025-05-25 04:14:35.531680 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2025-05-25 04:14:35.531697 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2025-05-25 04:14:35.531708 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2025-05-25 04:14:35.531720 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2025-05-25 04:14:35.531731 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2025-05-25 04:14:35.531742 | orchestrator | skipping: [testbed-node-2] 2025-05-25 04:14:35.531753 | orchestrator | 2025-05-25 04:14:35.531764 | orchestrator | TASK [octavia : Copying over config.json files for services] ******************* 2025-05-25 04:14:35.531775 | orchestrator | Sunday 25 May 2025 04:12:40 +0000 (0:00:00.864) 0:02:32.612 ************ 2025-05-25 04:14:35.531802 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 
'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2025-05-25 04:14:35.531819 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2025-05-25 04:14:35.531831 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 
'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2025-05-25 04:14:35.531843 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2025-05-25 04:14:35.531854 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2025-05-25 04:14:35.531866 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 
'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2025-05-25 04:14:35.531893 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2025-05-25 04:14:35.531905 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2025-05-25 04:14:35.531921 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': 
['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2025-05-25 04:14:35.531933 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2025-05-25 04:14:35.531944 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2025-05-25 04:14:35.531956 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': 
['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2025-05-25 04:14:35.531980 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2025-05-25 04:14:35.531991 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2025-05-25 04:14:35.532001 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2025-05-25 04:14:35.532011 | orchestrator | 2025-05-25 04:14:35.532021 | orchestrator | TASK [octavia : Copying over octavia-wsgi.conf] ******************************** 2025-05-25 04:14:35.532031 | orchestrator | Sunday 25 May 2025 04:12:45 +0000 (0:00:05.087) 0:02:37.699 ************ 2025-05-25 04:14:35.532045 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/octavia/templates/octavia-wsgi.conf.j2) 2025-05-25 04:14:35.532056 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/octavia/templates/octavia-wsgi.conf.j2) 2025-05-25 04:14:35.532066 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/octavia/templates/octavia-wsgi.conf.j2) 2025-05-25 04:14:35.532075 | orchestrator | 2025-05-25 04:14:35.532085 | orchestrator | TASK [octavia : Copying over octavia.conf] ************************************* 2025-05-25 04:14:35.532095 | orchestrator | Sunday 25 May 2025 04:12:46 +0000 (0:00:01.577) 0:02:39.277 ************ 2025-05-25 04:14:35.532105 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': 
{'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2025-05-25 04:14:35.532116 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2025-05-25 04:14:35.532138 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 
'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2025-05-25 04:14:35.532149 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2025-05-25 04:14:35.532163 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2025-05-25 04:14:35.532174 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2025-05-25 04:14:35.532184 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': 
['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2025-05-25 04:14:35.532194 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2025-05-25 04:14:35.532261 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2025-05-25 04:14:35.532279 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': 
['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2025-05-25 04:14:35.532290 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2025-05-25 04:14:35.532305 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2025-05-25 04:14:35.532315 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2025-05-25 04:14:35.532326 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2025-05-25 04:14:35.532341 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2025-05-25 04:14:35.532351 | orchestrator | 2025-05-25 04:14:35.532361 | orchestrator | TASK [octavia : Copying over Octavia SSH key] ********************************** 2025-05-25 04:14:35.532371 | orchestrator | Sunday 25 May 2025 04:13:02 +0000 (0:00:15.815) 0:02:55.093 ************ 2025-05-25 04:14:35.532381 | orchestrator | changed: [testbed-node-0] 2025-05-25 04:14:35.532391 | orchestrator | changed: [testbed-node-1] 2025-05-25 04:14:35.532401 | orchestrator | changed: [testbed-node-2] 
2025-05-25 04:14:35.532463 | orchestrator | 2025-05-25 04:14:35.532474 | orchestrator | TASK [octavia : Copying certificate files for octavia-worker] ****************** 2025-05-25 04:14:35.532483 | orchestrator | Sunday 25 May 2025 04:13:04 +0000 (0:00:01.465) 0:02:56.559 ************ 2025-05-25 04:14:35.532493 | orchestrator | changed: [testbed-node-0] => (item=client.cert-and-key.pem) 2025-05-25 04:14:35.532503 | orchestrator | changed: [testbed-node-1] => (item=client.cert-and-key.pem) 2025-05-25 04:14:35.532518 | orchestrator | changed: [testbed-node-2] => (item=client.cert-and-key.pem) 2025-05-25 04:14:35.532528 | orchestrator | changed: [testbed-node-0] => (item=client_ca.cert.pem) 2025-05-25 04:14:35.532538 | orchestrator | changed: [testbed-node-1] => (item=client_ca.cert.pem) 2025-05-25 04:14:35.532548 | orchestrator | changed: [testbed-node-2] => (item=client_ca.cert.pem) 2025-05-25 04:14:35.532558 | orchestrator | changed: [testbed-node-0] => (item=server_ca.cert.pem) 2025-05-25 04:14:35.532567 | orchestrator | changed: [testbed-node-1] => (item=server_ca.cert.pem) 2025-05-25 04:14:35.532577 | orchestrator | changed: [testbed-node-2] => (item=server_ca.cert.pem) 2025-05-25 04:14:35.532587 | orchestrator | changed: [testbed-node-0] => (item=server_ca.key.pem) 2025-05-25 04:14:35.532596 | orchestrator | changed: [testbed-node-2] => (item=server_ca.key.pem) 2025-05-25 04:14:35.532606 | orchestrator | changed: [testbed-node-1] => (item=server_ca.key.pem) 2025-05-25 04:14:35.532615 | orchestrator | 2025-05-25 04:14:35.532625 | orchestrator | TASK [octavia : Copying certificate files for octavia-housekeeping] ************ 2025-05-25 04:14:35.532635 | orchestrator | Sunday 25 May 2025 04:13:09 +0000 (0:00:05.279) 0:03:01.838 ************ 2025-05-25 04:14:35.532645 | orchestrator | changed: [testbed-node-0] => (item=client.cert-and-key.pem) 2025-05-25 04:14:35.532655 | orchestrator | changed: [testbed-node-1] => (item=client.cert-and-key.pem) 2025-05-25 
04:14:35.532664 | orchestrator | changed: [testbed-node-2] => (item=client.cert-and-key.pem) 2025-05-25 04:14:35.532674 | orchestrator | changed: [testbed-node-0] => (item=client_ca.cert.pem) 2025-05-25 04:14:35.532684 | orchestrator | changed: [testbed-node-2] => (item=client_ca.cert.pem) 2025-05-25 04:14:35.532694 | orchestrator | changed: [testbed-node-1] => (item=client_ca.cert.pem) 2025-05-25 04:14:35.532703 | orchestrator | changed: [testbed-node-0] => (item=server_ca.cert.pem) 2025-05-25 04:14:35.532713 | orchestrator | changed: [testbed-node-2] => (item=server_ca.cert.pem) 2025-05-25 04:14:35.532723 | orchestrator | changed: [testbed-node-1] => (item=server_ca.cert.pem) 2025-05-25 04:14:35.532733 | orchestrator | changed: [testbed-node-0] => (item=server_ca.key.pem) 2025-05-25 04:14:35.532747 | orchestrator | changed: [testbed-node-2] => (item=server_ca.key.pem) 2025-05-25 04:14:35.532757 | orchestrator | changed: [testbed-node-1] => (item=server_ca.key.pem) 2025-05-25 04:14:35.532774 | orchestrator | 2025-05-25 04:14:35.532782 | orchestrator | TASK [octavia : Copying certificate files for octavia-health-manager] ********** 2025-05-25 04:14:35.532790 | orchestrator | Sunday 25 May 2025 04:13:14 +0000 (0:00:04.768) 0:03:06.607 ************ 2025-05-25 04:14:35.532798 | orchestrator | changed: [testbed-node-0] => (item=client.cert-and-key.pem) 2025-05-25 04:14:35.532805 | orchestrator | changed: [testbed-node-2] => (item=client.cert-and-key.pem) 2025-05-25 04:14:35.532814 | orchestrator | changed: [testbed-node-1] => (item=client.cert-and-key.pem) 2025-05-25 04:14:35.532821 | orchestrator | changed: [testbed-node-0] => (item=client_ca.cert.pem) 2025-05-25 04:14:35.532830 | orchestrator | changed: [testbed-node-2] => (item=client_ca.cert.pem) 2025-05-25 04:14:35.532837 | orchestrator | changed: [testbed-node-1] => (item=client_ca.cert.pem) 2025-05-25 04:14:35.532845 | orchestrator | changed: [testbed-node-0] => (item=server_ca.cert.pem) 2025-05-25 
04:14:35.532853 | orchestrator | changed: [testbed-node-2] => (item=server_ca.cert.pem) 2025-05-25 04:14:35.532861 | orchestrator | changed: [testbed-node-1] => (item=server_ca.cert.pem) 2025-05-25 04:14:35.532869 | orchestrator | changed: [testbed-node-0] => (item=server_ca.key.pem) 2025-05-25 04:14:35.532877 | orchestrator | changed: [testbed-node-1] => (item=server_ca.key.pem) 2025-05-25 04:14:35.532885 | orchestrator | changed: [testbed-node-2] => (item=server_ca.key.pem) 2025-05-25 04:14:35.532893 | orchestrator | 2025-05-25 04:14:35.532901 | orchestrator | TASK [octavia : Check octavia containers] ************************************** 2025-05-25 04:14:35.532909 | orchestrator | Sunday 25 May 2025 04:13:19 +0000 (0:00:04.964) 0:03:11.571 ************ 2025-05-25 04:14:35.532917 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2025-05-25 04:14:35.532932 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': 
['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2025-05-25 04:14:35.532941 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2025-05-25 04:14:35.532960 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': 
['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2025-05-25 04:14:35.532968 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2025-05-25 04:14:35.532977 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2025-05-25 04:14:35.532985 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2025-05-25 04:14:35.532997 | orchestrator 
| changed: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2025-05-25 04:14:35.533006 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2025-05-25 04:14:35.533014 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2025-05-25 04:14:35.533033 | orchestrator | changed: [testbed-node-1] => 
(item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2025-05-25 04:14:35.533042 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2025-05-25 04:14:35.533050 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2025-05-25 04:14:35.533059 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 
'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2025-05-25 04:14:35.533073 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2025-05-25 04:14:35.533082 | orchestrator | 2025-05-25 04:14:35.533090 | orchestrator | TASK [octavia : include_tasks] ************************************************* 2025-05-25 04:14:35.533098 | orchestrator | Sunday 25 May 2025 04:13:22 +0000 (0:00:03.524) 0:03:15.096 ************ 2025-05-25 04:14:35.533106 | orchestrator | skipping: [testbed-node-0] 2025-05-25 04:14:35.533115 | orchestrator | skipping: [testbed-node-1] 2025-05-25 04:14:35.533123 | orchestrator | skipping: [testbed-node-2] 2025-05-25 04:14:35.533131 | orchestrator | 2025-05-25 04:14:35.533145 | orchestrator | TASK [octavia : Creating Octavia database] ************************************* 2025-05-25 04:14:35.533153 | orchestrator | Sunday 25 May 2025 04:13:23 +0000 (0:00:00.315) 0:03:15.412 ************ 2025-05-25 04:14:35.533161 | orchestrator | changed: [testbed-node-0] 2025-05-25 04:14:35.533168 | orchestrator | 2025-05-25 04:14:35.533176 | 
orchestrator | TASK [octavia : Creating Octavia persistence database] *************************
2025-05-25 04:14:35.533184 | orchestrator | Sunday 25 May 2025 04:13:25 +0000 (0:00:02.432) 0:03:17.844 ************
2025-05-25 04:14:35.533192 | orchestrator | changed: [testbed-node-0]
2025-05-25 04:14:35.533200 | orchestrator |
2025-05-25 04:14:35.533208 | orchestrator | TASK [octavia : Creating Octavia database user and setting permissions] ********
2025-05-25 04:14:35.533216 | orchestrator | Sunday 25 May 2025 04:13:27 +0000 (0:00:02.008) 0:03:19.852 ************
2025-05-25 04:14:35.533224 | orchestrator | changed: [testbed-node-0]
2025-05-25 04:14:35.533232 | orchestrator |
2025-05-25 04:14:35.533240 | orchestrator | TASK [octavia : Creating Octavia persistence database user and setting permissions] ***
2025-05-25 04:14:35.533248 | orchestrator | Sunday 25 May 2025 04:13:29 +0000 (0:00:02.048) 0:03:21.901 ************
2025-05-25 04:14:35.533256 | orchestrator | changed: [testbed-node-0]
2025-05-25 04:14:35.533264 | orchestrator |
2025-05-25 04:14:35.533273 | orchestrator | TASK [octavia : Running Octavia bootstrap container] ***************************
2025-05-25 04:14:35.533281 | orchestrator | Sunday 25 May 2025 04:13:31 +0000 (0:00:02.012) 0:03:23.914 ************
2025-05-25 04:14:35.533289 | orchestrator | changed: [testbed-node-0]
2025-05-25 04:14:35.533297 | orchestrator |
2025-05-25 04:14:35.533309 | orchestrator | TASK [octavia : Flush handlers] ************************************************
2025-05-25 04:14:35.533317 | orchestrator | Sunday 25 May 2025 04:13:51 +0000 (0:00:19.515) 0:03:43.430 ************
2025-05-25 04:14:35.533325 | orchestrator |
2025-05-25 04:14:35.533333 | orchestrator | TASK [octavia : Flush handlers] ************************************************
2025-05-25 04:14:35.533341 | orchestrator | Sunday 25 May 2025 04:13:51 +0000 (0:00:00.074) 0:03:43.504 ************
2025-05-25 04:14:35.533349 | orchestrator |
2025-05-25 04:14:35.533357 | orchestrator | TASK [octavia : Flush handlers] ************************************************
2025-05-25 04:14:35.533365 | orchestrator | Sunday 25 May 2025 04:13:51 +0000 (0:00:00.068) 0:03:43.572 ************
2025-05-25 04:14:35.533373 | orchestrator |
2025-05-25 04:14:35.533380 | orchestrator | RUNNING HANDLER [octavia : Restart octavia-api container] **********************
2025-05-25 04:14:35.533388 | orchestrator | Sunday 25 May 2025 04:13:51 +0000 (0:00:00.062) 0:03:43.635 ************
2025-05-25 04:14:35.533396 | orchestrator | changed: [testbed-node-0]
2025-05-25 04:14:35.533404 | orchestrator | changed: [testbed-node-1]
2025-05-25 04:14:35.533425 | orchestrator | changed: [testbed-node-2]
2025-05-25 04:14:35.533434 | orchestrator |
2025-05-25 04:14:35.533442 | orchestrator | RUNNING HANDLER [octavia : Restart octavia-driver-agent container] *************
2025-05-25 04:14:35.533449 | orchestrator | Sunday 25 May 2025 04:14:06 +0000 (0:00:15.457) 0:03:59.092 ************
2025-05-25 04:14:35.533457 | orchestrator | changed: [testbed-node-0]
2025-05-25 04:14:35.533466 | orchestrator | changed: [testbed-node-2]
2025-05-25 04:14:35.533474 | orchestrator | changed: [testbed-node-1]
2025-05-25 04:14:35.533482 | orchestrator |
2025-05-25 04:14:35.533490 | orchestrator | RUNNING HANDLER [octavia : Restart octavia-health-manager container] ***********
2025-05-25 04:14:35.533498 | orchestrator | Sunday 25 May 2025 04:14:18 +0000 (0:00:11.548) 0:04:10.641 ************
2025-05-25 04:14:35.533506 | orchestrator | changed: [testbed-node-0]
2025-05-25 04:14:35.533514 | orchestrator | changed: [testbed-node-2]
2025-05-25 04:14:35.533522 | orchestrator | changed: [testbed-node-1]
2025-05-25 04:14:35.533530 | orchestrator |
2025-05-25 04:14:35.533538 | orchestrator | RUNNING HANDLER [octavia : Restart octavia-housekeeping container] *************
2025-05-25 04:14:35.533546 | orchestrator | Sunday 25 May 2025 04:14:23 +0000 (0:00:05.376) 0:04:16.018 ************
2025-05-25 04:14:35.533554 | orchestrator | changed: [testbed-node-0]
2025-05-25 04:14:35.533562 | orchestrator | changed: [testbed-node-2]
2025-05-25 04:14:35.533570 | orchestrator | changed: [testbed-node-1]
2025-05-25 04:14:35.533583 | orchestrator |
2025-05-25 04:14:35.533591 | orchestrator | RUNNING HANDLER [octavia : Restart octavia-worker container] *******************
2025-05-25 04:14:35.533599 | orchestrator | Sunday 25 May 2025 04:14:29 +0000 (0:00:05.523) 0:04:21.541 ************
2025-05-25 04:14:35.533607 | orchestrator | changed: [testbed-node-0]
2025-05-25 04:14:35.533615 | orchestrator | changed: [testbed-node-2]
2025-05-25 04:14:35.533622 | orchestrator | changed: [testbed-node-1]
2025-05-25 04:14:35.533630 | orchestrator |
2025-05-25 04:14:35.533638 | orchestrator | PLAY RECAP *********************************************************************
2025-05-25 04:14:35.533646 | orchestrator | testbed-node-0 : ok=57  changed=39  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0
2025-05-25 04:14:35.533655 | orchestrator | testbed-node-1 : ok=33  changed=22  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0
2025-05-25 04:14:35.533664 | orchestrator | testbed-node-2 : ok=33  changed=22  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0
2025-05-25 04:14:35.533672 | orchestrator |
2025-05-25 04:14:35.533680 | orchestrator |
2025-05-25 04:14:35.533688 | orchestrator | TASKS RECAP ********************************************************************
2025-05-25 04:14:35.533696 | orchestrator | Sunday 25 May 2025 04:14:34 +0000 (0:00:05.195) 0:04:26.737 ************
2025-05-25 04:14:35.533709 | orchestrator | ===============================================================================
2025-05-25 04:14:35.533717 | orchestrator | octavia : Running Octavia bootstrap container -------------------------- 19.52s
2025-05-25 04:14:35.533725 | orchestrator | octavia : Copying over octavia.conf ------------------------------------ 15.82s
2025-05-25 04:14:35.533733 | orchestrator | octavia : Restart octavia-api container -------------------------------- 15.46s
2025-05-25 04:14:35.533741 | orchestrator | octavia : Adding octavia related roles --------------------------------- 14.90s
2025-05-25 04:14:35.533749 | orchestrator | octavia : Add rules for security groups -------------------------------- 14.88s
2025-05-25 04:14:35.533757 | orchestrator | octavia : Restart octavia-driver-agent container ----------------------- 11.55s
2025-05-25 04:14:35.533765 | orchestrator | octavia : Create security groups for octavia --------------------------- 10.73s
2025-05-25 04:14:35.533773 | orchestrator | service-ks-register : octavia | Creating users -------------------------- 8.09s
2025-05-25 04:14:35.533781 | orchestrator | service-ks-register : octavia | Granting user roles --------------------- 7.32s
2025-05-25 04:14:35.533789 | orchestrator | octavia : Get security groups for octavia ------------------------------- 6.94s
2025-05-25 04:14:35.533798 | orchestrator | service-ks-register : octavia | Creating endpoints ---------------------- 6.15s
2025-05-25 04:14:35.533811 | orchestrator | octavia : Restart octavia-housekeeping container ------------------------ 5.52s
2025-05-25 04:14:35.533824 | orchestrator | octavia : Restart octavia-health-manager container ---------------------- 5.38s
2025-05-25 04:14:35.533836 | orchestrator | octavia : Copying certificate files for octavia-worker ------------------ 5.28s
2025-05-25 04:14:35.533850 | orchestrator | octavia : Restart octavia-worker container ------------------------------ 5.20s
2025-05-25 04:14:35.533864 | orchestrator | octavia : Create loadbalancer management network ------------------------ 5.12s
2025-05-25 04:14:35.533877 | orchestrator | octavia : Copying over config.json files for services ------------------- 5.09s
2025-05-25 04:14:35.533888 | orchestrator | octavia : Create loadbalancer management subnet ------------------------- 5.07s
2025-05-25 04:14:35.533901 | orchestrator | octavia : Create amphora flavor ----------------------------------------- 5.07s
2025-05-25 04:14:35.533909 | orchestrator | service-cert-copy : octavia | Copying over extra CA certificates -------- 4.98s
2025-05-25 04:14:35.533917 | orchestrator | 2025-05-25 04:14:35 | INFO  | Wait 1 second(s) until refresh of running tasks
2025-05-25 04:14:38.560587 | orchestrator | 2025-05-25 04:14:38 | INFO  | Wait 1 second(s) until refresh of running tasks
2025-05-25 04:14:41.604168 | orchestrator | 2025-05-25 04:14:41 | INFO  | Wait 1 second(s) until refresh of running tasks
2025-05-25 04:14:44.656010 | orchestrator | 2025-05-25 04:14:44 | INFO  | Wait 1 second(s) until refresh of running tasks
2025-05-25 04:14:47.694544 | orchestrator | 2025-05-25 04:14:47 | INFO  | Wait 1 second(s) until refresh of running tasks
2025-05-25 04:14:50.742167 | orchestrator | 2025-05-25 04:14:50 | INFO  | Wait 1 second(s) until refresh of running tasks
2025-05-25 04:14:53.787436 | orchestrator | 2025-05-25 04:14:53 | INFO  | Wait 1 second(s) until refresh of running tasks
2025-05-25 04:14:56.828694 | orchestrator | 2025-05-25 04:14:56 | INFO  | Wait 1 second(s) until refresh of running tasks
2025-05-25 04:14:59.868300 | orchestrator | 2025-05-25 04:14:59 | INFO  | Wait 1 second(s) until refresh of running tasks
2025-05-25 04:15:02.915380 | orchestrator | 2025-05-25 04:15:02 | INFO  | Wait 1 second(s) until refresh of running tasks
2025-05-25 04:15:05.959479 | orchestrator | 2025-05-25 04:15:05 | INFO  | Wait 1 second(s) until refresh of running tasks
2025-05-25 04:15:08.999658 | orchestrator | 2025-05-25 04:15:08 | INFO  | Wait 1 second(s) until refresh of running tasks
2025-05-25 04:15:12.049223 | orchestrator | 2025-05-25 04:15:12 | INFO  | Wait 1 second(s) until refresh of running tasks
2025-05-25 04:15:15.095238 | orchestrator | 2025-05-25 04:15:15 | INFO  | Wait 1 second(s) until refresh of running tasks
2025-05-25 04:15:18.142124 | orchestrator | 2025-05-25 04:15:18 | INFO  | Wait 1 second(s) until refresh of running tasks
2025-05-25 04:15:21.189179 | orchestrator | 2025-05-25 04:15:21 | INFO  | Wait 1 second(s) until refresh of running tasks
2025-05-25 04:15:24.233158 | orchestrator | 2025-05-25 04:15:24 | INFO  | Wait 1 second(s) until refresh of running tasks
2025-05-25 04:15:27.272105 | orchestrator | 2025-05-25 04:15:27 | INFO  | Wait 1 second(s) until refresh of running tasks
2025-05-25 04:15:30.327042 | orchestrator | 2025-05-25 04:15:30 | INFO  | Wait 1 second(s) until refresh of running tasks
2025-05-25 04:15:33.374869 | orchestrator | 2025-05-25 04:15:33 | INFO  | Wait 1 second(s) until refresh of running tasks
2025-05-25 04:15:36.417033 | orchestrator |
2025-05-25 04:15:36.663339 | orchestrator |
2025-05-25 04:15:36.668898 | orchestrator | --> DEPLOY IN A NUTSHELL -- END -- Sun May 25 04:15:36 UTC 2025
2025-05-25 04:15:36.669084 | orchestrator |
2025-05-25 04:15:37.142446 | orchestrator | ok: Runtime: 0:33:31.254419
2025-05-25 04:15:37.408088 |
2025-05-25 04:15:37.408276 | TASK [Bootstrap services]
2025-05-25 04:15:38.185844 | orchestrator |
2025-05-25 04:15:38.186084 | orchestrator | # BOOTSTRAP
2025-05-25 04:15:38.186112 | orchestrator |
2025-05-25 04:15:38.186126 | orchestrator | + set -e
2025-05-25 04:15:38.186139 | orchestrator | + echo
2025-05-25 04:15:38.186153 | orchestrator | + echo '# BOOTSTRAP'
2025-05-25 04:15:38.186171 | orchestrator | + echo
2025-05-25 04:15:38.186216 | orchestrator | + sh -c /opt/configuration/scripts/bootstrap-services.sh
2025-05-25 04:15:38.194657 | orchestrator | + set -e
2025-05-25 04:15:38.194713 | orchestrator | + sh -c /opt/configuration/scripts/bootstrap/300-openstack.sh
2025-05-25 04:15:40.114012 | orchestrator | 2025-05-25 04:15:40 | INFO  | It takes a moment until task 1a115c9d-9403-418e-883c-76b243373f1d (flavor-manager) has been started and output is visible here.
2025-05-25 04:15:43.916558 | orchestrator | 2025-05-25 04:15:43 | INFO  | Flavor SCS-1V-4 created
2025-05-25 04:15:44.174951 | orchestrator | 2025-05-25 04:15:44 | INFO  | Flavor SCS-2V-8 created
2025-05-25 04:15:44.324230 | orchestrator | 2025-05-25 04:15:44 | INFO  | Flavor SCS-4V-16 created
2025-05-25 04:15:44.467001 | orchestrator | 2025-05-25 04:15:44 | INFO  | Flavor SCS-8V-32 created
2025-05-25 04:15:44.605148 | orchestrator | 2025-05-25 04:15:44 | INFO  | Flavor SCS-1V-2 created
2025-05-25 04:15:44.735995 | orchestrator | 2025-05-25 04:15:44 | INFO  | Flavor SCS-2V-4 created
2025-05-25 04:15:44.849012 | orchestrator | 2025-05-25 04:15:44 | INFO  | Flavor SCS-4V-8 created
2025-05-25 04:15:45.003921 | orchestrator | 2025-05-25 04:15:44 | INFO  | Flavor SCS-8V-16 created
2025-05-25 04:15:45.131475 | orchestrator | 2025-05-25 04:15:45 | INFO  | Flavor SCS-16V-32 created
2025-05-25 04:15:45.279561 | orchestrator | 2025-05-25 04:15:45 | INFO  | Flavor SCS-1V-8 created
2025-05-25 04:15:45.403148 | orchestrator | 2025-05-25 04:15:45 | INFO  | Flavor SCS-2V-16 created
2025-05-25 04:15:45.520548 | orchestrator | 2025-05-25 04:15:45 | INFO  | Flavor SCS-4V-32 created
2025-05-25 04:15:45.656264 | orchestrator | 2025-05-25 04:15:45 | INFO  | Flavor SCS-1L-1 created
2025-05-25 04:15:45.786198 | orchestrator | 2025-05-25 04:15:45 | INFO  | Flavor SCS-2V-4-20s created
2025-05-25 04:15:45.914668 | orchestrator | 2025-05-25 04:15:45 | INFO  | Flavor SCS-4V-16-100s created
2025-05-25 04:15:46.055116 | orchestrator | 2025-05-25 04:15:46 | INFO  | Flavor SCS-1V-4-10 created
2025-05-25 04:15:46.184627 | orchestrator | 2025-05-25 04:15:46 | INFO  | Flavor SCS-2V-8-20 created
2025-05-25 04:15:46.324753 | orchestrator | 2025-05-25 04:15:46 | INFO  | Flavor SCS-4V-16-50 created
2025-05-25 04:15:46.468523 | orchestrator | 2025-05-25 04:15:46 | INFO  | Flavor SCS-8V-32-100 created
2025-05-25 04:15:46.602889 | orchestrator | 2025-05-25 04:15:46 | INFO  | Flavor SCS-1V-2-5 created
2025-05-25 04:15:46.721802 | orchestrator | 2025-05-25 04:15:46 | INFO  | Flavor SCS-2V-4-10 created
2025-05-25 04:15:46.845632 | orchestrator | 2025-05-25 04:15:46 | INFO  | Flavor SCS-4V-8-20 created
2025-05-25 04:15:46.980730 | orchestrator | 2025-05-25 04:15:46 | INFO  | Flavor SCS-8V-16-50 created
2025-05-25 04:15:47.128960 | orchestrator | 2025-05-25 04:15:47 | INFO  | Flavor SCS-16V-32-100 created
2025-05-25 04:15:47.238692 | orchestrator | 2025-05-25 04:15:47 | INFO  | Flavor SCS-1V-8-20 created
2025-05-25 04:15:47.375992 | orchestrator | 2025-05-25 04:15:47 | INFO  | Flavor SCS-2V-16-50 created
2025-05-25 04:15:47.514311 | orchestrator | 2025-05-25 04:15:47 | INFO  | Flavor SCS-4V-32-100 created
2025-05-25 04:15:47.645773 | orchestrator | 2025-05-25 04:15:47 | INFO  | Flavor SCS-1L-1-5 created
2025-05-25 04:15:49.791111 | orchestrator | 2025-05-25 04:15:49 | INFO  | Trying to run play bootstrap-basic in environment openstack
2025-05-25 04:15:49.852131 | orchestrator | 2025-05-25 04:15:49 | INFO  | Task 3f7bbb52-dc0f-4994-a76c-0c258cb26496 (bootstrap-basic) was prepared for execution.
2025-05-25 04:15:49.852262 | orchestrator | 2025-05-25 04:15:49 | INFO  | It takes a moment until task 3f7bbb52-dc0f-4994-a76c-0c258cb26496 (bootstrap-basic) has been started and output is visible here.
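The flavor names created by the flavor-manager run above follow the SCS naming scheme, roughly SCS-<vCPUs><class>-<RAM GiB>[-<disk GB>[s]]. A minimal sketch of a parser for that pattern; the class-letter meanings and the trailing "s" (SSD) reading are assumptions, and `parse_flavor` is a hypothetical helper for illustration, not part of the flavor-manager itself:

```python
import re

# Pattern covering the names seen in the log, e.g. SCS-1V-4, SCS-1L-1-5,
# SCS-4V-16-100s. The optional trailing "s" is assumed to mark SSD storage.
FLAVOR_RE = re.compile(r"^SCS-(\d+)([VL])-(\d+)(?:-(\d+)(s?))?$")

def parse_flavor(name: str) -> dict:
    m = FLAVOR_RE.match(name)
    if not m:
        raise ValueError(f"not an SCS flavor name: {name}")
    vcpus, cls, ram, disk, ssd = m.groups()
    return {
        "vcpus": int(vcpus),
        "cpu_class": cls,          # V / L class letter (interpretation assumed)
        "ram_gib": int(ram),
        "disk_gb": int(disk) if disk else None,  # None = no root disk in name
        "ssd": ssd == "s",
    }
```

For example, `parse_flavor("SCS-4V-16-100s")` yields 4 vCPUs, 16 GiB RAM and a 100 GB SSD disk, while `SCS-1L-1` carries no disk component at all.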
2025-05-25 04:15:53.762709 | orchestrator |
2025-05-25 04:15:53.763239 | orchestrator | PLAY [Bootstrap basic OpenStack services] **************************************
2025-05-25 04:15:53.764509 | orchestrator |
2025-05-25 04:15:53.766359 | orchestrator | TASK [Gathering Facts] *********************************************************
2025-05-25 04:15:53.766938 | orchestrator | Sunday 25 May 2025 04:15:53 +0000 (0:00:00.071) 0:00:00.071 ************
2025-05-25 04:15:55.589176 | orchestrator | ok: [localhost]
2025-05-25 04:15:55.590748 | orchestrator |
2025-05-25 04:15:55.590783 | orchestrator | TASK [Get volume type LUKS] ****************************************************
2025-05-25 04:15:55.591519 | orchestrator | Sunday 25 May 2025 04:15:55 +0000 (0:00:01.828) 0:00:01.900 ************
2025-05-25 04:16:04.924039 | orchestrator | ok: [localhost]
2025-05-25 04:16:04.924169 | orchestrator |
2025-05-25 04:16:04.924196 | orchestrator | TASK [Create volume type LUKS] *************************************************
2025-05-25 04:16:04.924653 | orchestrator | Sunday 25 May 2025 04:16:04 +0000 (0:00:09.333) 0:00:11.234 ************
2025-05-25 04:16:12.151197 | orchestrator | changed: [localhost]
2025-05-25 04:16:12.151310 | orchestrator |
2025-05-25 04:16:12.152733 | orchestrator | TASK [Get volume type local] ***************************************************
2025-05-25 04:16:12.152847 | orchestrator | Sunday 25 May 2025 04:16:12 +0000 (0:00:07.227) 0:00:18.461 ************
2025-05-25 04:16:19.231825 | orchestrator | ok: [localhost]
2025-05-25 04:16:19.232939 | orchestrator |
2025-05-25 04:16:19.233023 | orchestrator | TASK [Create volume type local] ************************************************
2025-05-25 04:16:19.233041 | orchestrator | Sunday 25 May 2025 04:16:19 +0000 (0:00:07.080) 0:00:25.542 ************
2025-05-25 04:16:25.699051 | orchestrator | changed: [localhost]
2025-05-25 04:16:25.699169 | orchestrator |
2025-05-25 04:16:25.699188 | orchestrator | TASK [Create public network] ***************************************************
2025-05-25 04:16:25.699728 | orchestrator | Sunday 25 May 2025 04:16:25 +0000 (0:00:06.466) 0:00:32.009 ************
2025-05-25 04:16:30.807120 | orchestrator | changed: [localhost]
2025-05-25 04:16:30.810522 | orchestrator |
2025-05-25 04:16:30.811620 | orchestrator | TASK [Set public network to default] *******************************************
2025-05-25 04:16:30.812845 | orchestrator | Sunday 25 May 2025 04:16:30 +0000 (0:00:05.107) 0:00:37.116 ************
2025-05-25 04:16:36.704816 | orchestrator | changed: [localhost]
2025-05-25 04:16:36.705408 | orchestrator |
2025-05-25 04:16:36.706799 | orchestrator | TASK [Create public subnet] ****************************************************
2025-05-25 04:16:36.708019 | orchestrator | Sunday 25 May 2025 04:16:36 +0000 (0:00:05.895) 0:00:43.012 ************
2025-05-25 04:16:40.754621 | orchestrator | changed: [localhost]
2025-05-25 04:16:40.755525 | orchestrator |
2025-05-25 04:16:40.756484 | orchestrator | TASK [Create default IPv4 subnet pool] *****************************************
2025-05-25 04:16:40.757439 | orchestrator | Sunday 25 May 2025 04:16:40 +0000 (0:00:04.052) 0:00:47.064 ************
2025-05-25 04:16:44.462221 | orchestrator | changed: [localhost]
2025-05-25 04:16:44.462331 | orchestrator |
2025-05-25 04:16:44.463090 | orchestrator | TASK [Create manager role] *****************************************************
2025-05-25 04:16:44.463906 | orchestrator | Sunday 25 May 2025 04:16:44 +0000 (0:00:03.708) 0:00:50.773 ************
2025-05-25 04:16:47.924433 | orchestrator | ok: [localhost]
2025-05-25 04:16:47.924548 | orchestrator |
2025-05-25 04:16:47.927672 | orchestrator | PLAY RECAP *********************************************************************
2025-05-25 04:16:47.927734 | orchestrator | 2025-05-25 04:16:47 | INFO  | Play has been completed. There may now be a delay until all logs have been written.
2025-05-25 04:16:47.927745 | orchestrator | 2025-05-25 04:16:47 | INFO  | Please wait and do not abort execution.
2025-05-25 04:16:47.928546 | orchestrator | localhost : ok=10  changed=6  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-05-25 04:16:47.929836 | orchestrator |
2025-05-25 04:16:47.931049 | orchestrator |
2025-05-25 04:16:47.932775 | orchestrator | TASKS RECAP ********************************************************************
2025-05-25 04:16:47.933058 | orchestrator | Sunday 25 May 2025 04:16:47 +0000 (0:00:03.460) 0:00:54.233 ************
2025-05-25 04:16:47.934143 | orchestrator | ===============================================================================
2025-05-25 04:16:47.935614 | orchestrator | Get volume type LUKS ---------------------------------------------------- 9.33s
2025-05-25 04:16:47.936130 | orchestrator | Create volume type LUKS ------------------------------------------------- 7.23s
2025-05-25 04:16:47.937160 | orchestrator | Get volume type local --------------------------------------------------- 7.08s
2025-05-25 04:16:47.938007 | orchestrator | Create volume type local ------------------------------------------------ 6.47s
2025-05-25 04:16:47.938777 | orchestrator | Set public network to default ------------------------------------------- 5.90s
2025-05-25 04:16:47.939499 | orchestrator | Create public network --------------------------------------------------- 5.11s
2025-05-25 04:16:47.940388 | orchestrator | Create public subnet ---------------------------------------------------- 4.05s
2025-05-25 04:16:47.941041 | orchestrator | Create default IPv4 subnet pool ----------------------------------------- 3.71s
2025-05-25 04:16:47.941586 | orchestrator | Create manager role ----------------------------------------------------- 3.46s
2025-05-25 04:16:47.941928 | orchestrator | Gathering Facts --------------------------------------------------------- 1.83s
2025-05-25 04:16:50.125059 | orchestrator | 2025-05-25 04:16:50 | INFO  | It takes a moment until task 8d84955b-e41f-4576-8200-37b32990372d (image-manager) has been started and output is visible here.
2025-05-25 04:16:53.557489 | orchestrator | 2025-05-25 04:16:53 | INFO  | Processing image 'Cirros 0.6.2'
2025-05-25 04:16:53.767963 | orchestrator | 2025-05-25 04:16:53 | INFO  | Tested URL https://github.com/cirros-dev/cirros/releases/download/0.6.2/cirros-0.6.2-x86_64-disk.img: 302
2025-05-25 04:16:53.771142 | orchestrator | 2025-05-25 04:16:53 | INFO  | Importing image Cirros 0.6.2
2025-05-25 04:16:53.771533 | orchestrator | 2025-05-25 04:16:53 | INFO  | Importing from URL https://github.com/cirros-dev/cirros/releases/download/0.6.2/cirros-0.6.2-x86_64-disk.img
2025-05-25 04:16:55.421815 | orchestrator | 2025-05-25 04:16:55 | INFO  | Waiting for image to leave queued state...
2025-05-25 04:16:57.469436 | orchestrator | 2025-05-25 04:16:57 | INFO  | Waiting for import to complete...
2025-05-25 04:17:07.780546 | orchestrator | 2025-05-25 04:17:07 | INFO  | Import of 'Cirros 0.6.2' successfully completed, reloading images
2025-05-25 04:17:07.968841 | orchestrator | 2025-05-25 04:17:07 | INFO  | Checking parameters of 'Cirros 0.6.2'
2025-05-25 04:17:07.968942 | orchestrator | 2025-05-25 04:17:07 | INFO  | Setting internal_version = 0.6.2
2025-05-25 04:17:07.968957 | orchestrator | 2025-05-25 04:17:07 | INFO  | Setting image_original_user = cirros
2025-05-25 04:17:07.969226 | orchestrator | 2025-05-25 04:17:07 | INFO  | Adding tag os:cirros
2025-05-25 04:17:08.198979 | orchestrator | 2025-05-25 04:17:08 | INFO  | Setting property architecture: x86_64
2025-05-25 04:17:08.490523 | orchestrator | 2025-05-25 04:17:08 | INFO  | Setting property hw_disk_bus: scsi
2025-05-25 04:17:08.704524 | orchestrator | 2025-05-25 04:17:08 | INFO  | Setting property hw_rng_model: virtio
2025-05-25 04:17:08.911211 | orchestrator | 2025-05-25 04:17:08 | INFO  | Setting property hw_scsi_model: virtio-scsi
2025-05-25 04:17:09.124415 | orchestrator | 2025-05-25 04:17:09 | INFO  | Setting property hw_watchdog_action: reset
2025-05-25 04:17:09.333149 | orchestrator | 2025-05-25 04:17:09 | INFO  | Setting property hypervisor_type: qemu
2025-05-25 04:17:09.545241 | orchestrator | 2025-05-25 04:17:09 | INFO  | Setting property os_distro: cirros
2025-05-25 04:17:09.770315 | orchestrator | 2025-05-25 04:17:09 | INFO  | Setting property replace_frequency: never
2025-05-25 04:17:09.976868 | orchestrator | 2025-05-25 04:17:09 | INFO  | Setting property uuid_validity: none
2025-05-25 04:17:10.197428 | orchestrator | 2025-05-25 04:17:10 | INFO  | Setting property provided_until: none
2025-05-25 04:17:10.442586 | orchestrator | 2025-05-25 04:17:10 | INFO  | Setting property image_description: Cirros
2025-05-25 04:17:10.633061 | orchestrator | 2025-05-25 04:17:10 | INFO  | Setting property image_name: Cirros
2025-05-25 04:17:10.820551 | orchestrator | 2025-05-25 04:17:10 | INFO  | Setting property internal_version: 0.6.2
2025-05-25 04:17:11.058551 | orchestrator | 2025-05-25 04:17:11 | INFO  | Setting property image_original_user: cirros
2025-05-25 04:17:11.246585 | orchestrator | 2025-05-25 04:17:11 | INFO  | Setting property os_version: 0.6.2
2025-05-25 04:17:11.425840 | orchestrator | 2025-05-25 04:17:11 | INFO  | Setting property image_source: https://github.com/cirros-dev/cirros/releases/download/0.6.2/cirros-0.6.2-x86_64-disk.img
2025-05-25 04:17:11.639783 | orchestrator | 2025-05-25 04:17:11 | INFO  | Setting property image_build_date: 2023-05-30
2025-05-25 04:17:11.857544 | orchestrator | 2025-05-25 04:17:11 | INFO  | Checking status of 'Cirros 0.6.2'
2025-05-25 04:17:11.857705 | orchestrator | 2025-05-25 04:17:11 | INFO  | Checking visibility of 'Cirros 0.6.2'
2025-05-25 04:17:11.859380 | orchestrator | 2025-05-25 04:17:11 | INFO  | Setting visibility of 'Cirros 0.6.2' to 'public'
2025-05-25 04:17:12.035412 | orchestrator | 2025-05-25 04:17:12 | INFO  | Processing image 'Cirros 0.6.3'
2025-05-25 04:17:12.228945 | orchestrator | 2025-05-25 04:17:12 | INFO  | Tested URL https://github.com/cirros-dev/cirros/releases/download/0.6.3/cirros-0.6.3-x86_64-disk.img: 302
2025-05-25 04:17:12.229283 | orchestrator | 2025-05-25 04:17:12 | INFO  | Importing image Cirros 0.6.3
2025-05-25 04:17:12.230175 | orchestrator | 2025-05-25 04:17:12 | INFO  | Importing from URL https://github.com/cirros-dev/cirros/releases/download/0.6.3/cirros-0.6.3-x86_64-disk.img
2025-05-25 04:17:13.342811 | orchestrator | 2025-05-25 04:17:13 | INFO  | Waiting for image to leave queued state...
2025-05-25 04:17:15.373193 | orchestrator | 2025-05-25 04:17:15 | INFO  | Waiting for import to complete...
2025-05-25 04:17:25.685129 | orchestrator | 2025-05-25 04:17:25 | INFO  | Import of 'Cirros 0.6.3' successfully completed, reloading images
2025-05-25 04:17:25.963176 | orchestrator | 2025-05-25 04:17:25 | INFO  | Checking parameters of 'Cirros 0.6.3'
2025-05-25 04:17:25.963392 | orchestrator | 2025-05-25 04:17:25 | INFO  | Setting internal_version = 0.6.3
2025-05-25 04:17:25.963952 | orchestrator | 2025-05-25 04:17:25 | INFO  | Setting image_original_user = cirros
2025-05-25 04:17:25.964312 | orchestrator | 2025-05-25 04:17:25 | INFO  | Adding tag os:cirros
2025-05-25 04:17:26.248418 | orchestrator | 2025-05-25 04:17:26 | INFO  | Setting property architecture: x86_64
2025-05-25 04:17:26.440880 | orchestrator | 2025-05-25 04:17:26 | INFO  | Setting property hw_disk_bus: scsi
2025-05-25 04:17:26.653854 | orchestrator | 2025-05-25 04:17:26 | INFO  | Setting property hw_rng_model: virtio
2025-05-25 04:17:26.871424 | orchestrator | 2025-05-25 04:17:26 | INFO  | Setting property hw_scsi_model: virtio-scsi
2025-05-25 04:17:27.092844 | orchestrator | 2025-05-25 04:17:27 | INFO  | Setting property hw_watchdog_action: reset
2025-05-25 04:17:27.263217 | orchestrator | 2025-05-25 04:17:27 | INFO  | Setting property hypervisor_type: qemu
2025-05-25 04:17:27.444991 | orchestrator | 2025-05-25 04:17:27 | INFO  | Setting property os_distro: cirros
2025-05-25 04:17:27.649367 | orchestrator | 2025-05-25 04:17:27 | INFO  | Setting property replace_frequency: never
2025-05-25 04:17:27.842801 | orchestrator | 2025-05-25 04:17:27 | INFO  | Setting property uuid_validity: none
2025-05-25 04:17:28.027892 | orchestrator | 2025-05-25 04:17:28 | INFO  | Setting property provided_until: none
2025-05-25 04:17:28.418247 | orchestrator | 2025-05-25 04:17:28 | INFO  | Setting property image_description: Cirros
2025-05-25 04:17:28.607869 | orchestrator | 2025-05-25 04:17:28 | INFO  | Setting property image_name: Cirros
2025-05-25 04:17:28.852653 | orchestrator | 2025-05-25 04:17:28 | INFO  | Setting property internal_version: 0.6.3
2025-05-25 04:17:29.146650 | orchestrator | 2025-05-25 04:17:29 | INFO  | Setting property image_original_user: cirros
2025-05-25 04:17:29.360673 | orchestrator | 2025-05-25 04:17:29 | INFO  | Setting property os_version: 0.6.3
2025-05-25 04:17:29.555513 | orchestrator | 2025-05-25 04:17:29 | INFO  | Setting property image_source: https://github.com/cirros-dev/cirros/releases/download/0.6.3/cirros-0.6.3-x86_64-disk.img
2025-05-25 04:17:29.762997 | orchestrator | 2025-05-25 04:17:29 | INFO  | Setting property image_build_date: 2024-09-26
2025-05-25 04:17:29.970405 | orchestrator | 2025-05-25 04:17:29 | INFO  | Checking status of 'Cirros 0.6.3'
2025-05-25 04:17:29.971106 | orchestrator | 2025-05-25 04:17:29 | INFO  | Checking visibility of 'Cirros 0.6.3'
2025-05-25 04:17:29.972185 | orchestrator | 2025-05-25 04:17:29 | INFO  | Setting visibility of 'Cirros 0.6.3' to 'public'
2025-05-25 04:17:30.985812 | orchestrator | + sh -c /opt/configuration/scripts/bootstrap/301-openstack-octavia-amhpora-image.sh
2025-05-25 04:17:32.870124 | orchestrator | 2025-05-25 04:17:32 | INFO  | date: 2025-05-25
2025-05-25 04:17:32.870255 | orchestrator | 2025-05-25 04:17:32 | INFO  | image: octavia-amphora-haproxy-2024.2.20250525.qcow2
2025-05-25 04:17:32.870272 | orchestrator | 2025-05-25 04:17:32 | INFO  | url: https://swift.services.a.regiocloud.tech/swift/v1/AUTH_b182637428444b9aa302bb8d5a5a418c/openstack-octavia-amphora-image/octavia-amphora-haproxy-2024.2.20250525.qcow2
2025-05-25 04:17:32.870435 | orchestrator | 2025-05-25 04:17:32 | INFO  | checksum_url: https://swift.services.a.regiocloud.tech/swift/v1/AUTH_b182637428444b9aa302bb8d5a5a418c/openstack-octavia-amphora-image/octavia-amphora-haproxy-2024.2.20250525.qcow2.CHECKSUM
2025-05-25 04:17:32.904571 | orchestrator | 2025-05-25 04:17:32 | INFO  | checksum: dfd6a126dc5e8611634a748cbee25e49048e05f13d763463c17af642c5f98ec7
2025-05-25 04:17:32.979193 | orchestrator | 2025-05-25 04:17:32 | INFO  | It takes a moment until task 91ea3685-1a80-4f8c-9264-3c1fa7cab8e4 (image-manager) has been started and output is visible here.
2025-05-25 04:17:35.292226 | orchestrator | 2025-05-25 04:17:35 | INFO  | Processing image 'OpenStack Octavia Amphora 2025-05-25'
2025-05-25 04:17:35.315362 | orchestrator | 2025-05-25 04:17:35 | INFO  | Tested URL https://swift.services.a.regiocloud.tech/swift/v1/AUTH_b182637428444b9aa302bb8d5a5a418c/openstack-octavia-amphora-image/octavia-amphora-haproxy-2024.2.20250525.qcow2: 200
2025-05-25 04:17:35.315736 | orchestrator | 2025-05-25 04:17:35 | INFO  | Importing image OpenStack Octavia Amphora 2025-05-25
2025-05-25 04:17:35.316615 | orchestrator | 2025-05-25 04:17:35 | INFO  | Importing from URL https://swift.services.a.regiocloud.tech/swift/v1/AUTH_b182637428444b9aa302bb8d5a5a418c/openstack-octavia-amphora-image/octavia-amphora-haproxy-2024.2.20250525.qcow2
2025-05-25 04:17:35.692790 | orchestrator | 2025-05-25 04:17:35 | INFO  | Waiting for image to leave queued state...
2025-05-25 04:17:37.736948 | orchestrator | 2025-05-25 04:17:37 | INFO  | Waiting for import to complete...
2025-05-25 04:17:47.830982 | orchestrator | 2025-05-25 04:17:47 | INFO  | Waiting for import to complete...
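The amphora image above is published together with a .CHECKSUM file, and the script logs the expected sha256 value before handing the URL to the image-manager. A minimal sketch of how such a download could be verified locally; `sha256_of` and `verify` are hypothetical helpers for illustration, not the image-manager's actual code:

```python
import hashlib

def sha256_of(path: str, chunk: int = 1 << 20) -> str:
    # Stream in 1 MiB chunks so a multi-GB qcow2 never has to fit in memory.
    h = hashlib.sha256()
    with open(path, "rb") as f:
        while block := f.read(chunk):
            h.update(block)
    return h.hexdigest()

def verify(path: str, expected: str) -> bool:
    # Compare against the value published in the .CHECKSUM file.
    return sha256_of(path) == expected.strip().lower()
```

A mismatch here would indicate a corrupted or tampered download before the image ever reaches Glance.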
2025-05-25 04:17:57.936570 | orchestrator | 2025-05-25 04:17:57 | INFO  | Waiting for import to complete...
2025-05-25 04:18:08.046656 | orchestrator | 2025-05-25 04:18:08 | INFO  | Waiting for import to complete...
2025-05-25 04:18:18.142869 | orchestrator | 2025-05-25 04:18:18 | INFO  | Waiting for import to complete...
2025-05-25 04:18:28.454774 | orchestrator | 2025-05-25 04:18:28 | INFO  | Import of 'OpenStack Octavia Amphora 2025-05-25' successfully completed, reloading images
2025-05-25 04:18:28.773965 | orchestrator | 2025-05-25 04:18:28 | INFO  | Checking parameters of 'OpenStack Octavia Amphora 2025-05-25'
2025-05-25 04:18:28.774893 | orchestrator | 2025-05-25 04:18:28 | INFO  | Setting internal_version = 2025-05-25
2025-05-25 04:18:28.775945 | orchestrator | 2025-05-25 04:18:28 | INFO  | Setting image_original_user = ubuntu
2025-05-25 04:18:28.776689 | orchestrator | 2025-05-25 04:18:28 | INFO  | Adding tag amphora
2025-05-25 04:18:28.993553 | orchestrator | 2025-05-25 04:18:28 | INFO  | Adding tag os:ubuntu
2025-05-25 04:18:29.218936 | orchestrator | 2025-05-25 04:18:29 | INFO  | Setting property architecture: x86_64
2025-05-25 04:18:29.431686 | orchestrator | 2025-05-25 04:18:29 | INFO  | Setting property hw_disk_bus: scsi
2025-05-25 04:18:29.636963 | orchestrator | 2025-05-25 04:18:29 | INFO  | Setting property hw_rng_model: virtio
2025-05-25 04:18:29.826995 | orchestrator | 2025-05-25 04:18:29 | INFO  | Setting property hw_scsi_model: virtio-scsi
2025-05-25 04:18:30.008290 | orchestrator | 2025-05-25 04:18:30 | INFO  | Setting property hw_watchdog_action: reset
2025-05-25 04:18:30.197231 | orchestrator | 2025-05-25 04:18:30 | INFO  | Setting property hypervisor_type: qemu
2025-05-25 04:18:30.409791 | orchestrator | 2025-05-25 04:18:30 | INFO  | Setting property os_distro: ubuntu
2025-05-25 04:18:30.630185 | orchestrator | 2025-05-25 04:18:30 | INFO  | Setting property replace_frequency: quarterly
2025-05-25 04:18:30.848730 | orchestrator | 2025-05-25 04:18:30 | INFO  | Setting property uuid_validity: last-1
2025-05-25 04:18:31.086737 | orchestrator | 2025-05-25 04:18:31 | INFO  | Setting property provided_until: none
2025-05-25 04:18:31.274393 | orchestrator | 2025-05-25 04:18:31 | INFO  | Setting property image_description: OpenStack Octavia Amphora
2025-05-25 04:18:31.478774 | orchestrator | 2025-05-25 04:18:31 | INFO  | Setting property image_name: OpenStack Octavia Amphora
2025-05-25 04:18:31.685059 | orchestrator | 2025-05-25 04:18:31 | INFO  | Setting property internal_version: 2025-05-25
2025-05-25 04:18:31.889601 | orchestrator | 2025-05-25 04:18:31 | INFO  | Setting property image_original_user: ubuntu
2025-05-25 04:18:32.095963 | orchestrator | 2025-05-25 04:18:32 | INFO  | Setting property os_version: 2025-05-25
2025-05-25 04:18:32.293725 | orchestrator | 2025-05-25 04:18:32 | INFO  | Setting property image_source: https://swift.services.a.regiocloud.tech/swift/v1/AUTH_b182637428444b9aa302bb8d5a5a418c/openstack-octavia-amphora-image/octavia-amphora-haproxy-2024.2.20250525.qcow2
2025-05-25 04:18:32.487126 | orchestrator | 2025-05-25 04:18:32 | INFO  | Setting property image_build_date: 2025-05-25
2025-05-25 04:18:32.693806 | orchestrator | 2025-05-25 04:18:32 | INFO  | Checking status of 'OpenStack Octavia Amphora 2025-05-25'
2025-05-25 04:18:32.694486 | orchestrator | 2025-05-25 04:18:32 | INFO  | Checking visibility of 'OpenStack Octavia Amphora 2025-05-25'
2025-05-25 04:18:32.866081 | orchestrator | 2025-05-25 04:18:32 | INFO  | Processing image 'Cirros 0.6.3' (removal candidate)
2025-05-25 04:18:32.867518 | orchestrator | 2025-05-25 04:18:32 | WARNING  | No image definition found for 'Cirros 0.6.3', image will be ignored
2025-05-25 04:18:32.870080 | orchestrator | 2025-05-25 04:18:32 | INFO  | Processing image 'Cirros 0.6.2' (removal candidate)
2025-05-25 04:18:32.870630 | orchestrator | 2025-05-25 04:18:32 | WARNING  | No image definition found for 'Cirros 0.6.2', image will be ignored
2025-05-25 04:18:33.590633 | orchestrator | ok: Runtime: 0:02:55.462437
2025-05-25 04:18:33.615064 |
2025-05-25 04:18:33.615208 | TASK [Run checks]
2025-05-25 04:18:34.408652 | orchestrator | + set -e
2025-05-25 04:18:34.408857 | orchestrator | + source /opt/configuration/scripts/include.sh
2025-05-25 04:18:34.408881 | orchestrator | ++ export INTERACTIVE=false
2025-05-25 04:18:34.408902 | orchestrator | ++ INTERACTIVE=false
2025-05-25 04:18:34.408916 | orchestrator | ++ export OSISM_APPLY_RETRY=1
2025-05-25 04:18:34.408928 | orchestrator | ++ OSISM_APPLY_RETRY=1
2025-05-25 04:18:34.408942 | orchestrator | + source /opt/configuration/scripts/manager-version.sh
2025-05-25 04:18:34.409670 | orchestrator | +++ awk '-F: ' '/^manager_version:/ { print $2 }' /opt/configuration/environments/manager/configuration.yml
2025-05-25 04:18:34.416734 | orchestrator |
2025-05-25 04:18:34.416873 | orchestrator | # CHECK
2025-05-25 04:18:34.416905 | orchestrator |
2025-05-25 04:18:34.416932 | orchestrator | ++ export MANAGER_VERSION=latest
2025-05-25 04:18:34.416965 | orchestrator | ++ MANAGER_VERSION=latest
2025-05-25 04:18:34.416990 | orchestrator | + echo
2025-05-25 04:18:34.417014 | orchestrator | + echo '# CHECK'
2025-05-25 04:18:34.417035 | orchestrator | + echo
2025-05-25 04:18:34.417063 | orchestrator | + for node in testbed-manager testbed-node-0 testbed-node-1 testbed-node-2
2025-05-25 04:18:34.417270 | orchestrator | ++ semver latest 5.0.0
2025-05-25 04:18:34.478213 | orchestrator |
2025-05-25 04:18:34.478339 | orchestrator | ## Containers @ testbed-manager
2025-05-25 04:18:34.478354 | orchestrator |
2025-05-25 04:18:34.478392 | orchestrator | + [[ -1 -eq -1 ]]
2025-05-25 04:18:34.478402 | orchestrator | + [[ latest != \l\a\t\e\s\t ]]
2025-05-25 04:18:34.478412 | orchestrator | + echo
2025-05-25 04:18:34.478423 | orchestrator | + echo '## Containers @ testbed-manager'
2025-05-25 04:18:34.478433 | orchestrator | + echo
2025-05-25 04:18:34.478443 | orchestrator | + osism container testbed-manager ps
2025-05-25 04:18:36.556637 | orchestrator | CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
2025-05-25 04:18:36.556833 | orchestrator | a236876c78c9 registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2 "dumb-init --single-…" 12 minutes ago Up 12 minutes prometheus_blackbox_exporter
2025-05-25 04:18:36.556877 | orchestrator | d8efaf77f44d registry.osism.tech/kolla/prometheus-alertmanager:2024.2 "dumb-init --single-…" 13 minutes ago Up 13 minutes prometheus_alertmanager
2025-05-25 04:18:36.556900 | orchestrator | 89c6ffe3b375 registry.osism.tech/kolla/prometheus-cadvisor:2024.2 "dumb-init --single-…" 13 minutes ago Up 13 minutes prometheus_cadvisor
2025-05-25 04:18:36.556913 | orchestrator | b60d70841ddd registry.osism.tech/kolla/prometheus-node-exporter:2024.2 "dumb-init --single-…" 14 minutes ago Up 14 minutes prometheus_node_exporter
2025-05-25 04:18:36.556927 | orchestrator | 9770b6da5ede registry.osism.tech/kolla/prometheus-v2-server:2024.2 "dumb-init --single-…" 14 minutes ago Up 14 minutes prometheus_server
2025-05-25 04:18:36.556939 | orchestrator | 67153e694cc9 registry.osism.tech/osism/cephclient:reef "/usr/bin/dumb-init …" 16 minutes ago Up 16 minutes cephclient
2025-05-25 04:18:36.556947 | orchestrator | 06916883e67a registry.osism.tech/kolla/cron:2024.2 "dumb-init --single-…" 28 minutes ago Up 28 minutes cron
2025-05-25 04:18:36.556955 | orchestrator | 724bab32856a registry.osism.tech/kolla/kolla-toolbox:2024.2 "dumb-init --single-…" 28 minutes ago Up 28 minutes kolla_toolbox
2025-05-25 04:18:36.556962 | orchestrator | aa0d13408a14 phpmyadmin/phpmyadmin:5.2 "/docker-entrypoint.…" 29 minutes ago Up 28 minutes (healthy) 80/tcp phpmyadmin
2025-05-25 04:18:36.556989 | orchestrator | 30b9a41b1326 registry.osism.tech/kolla/fluentd:2024.2 "dumb-init --single-…" 29 minutes ago Up 29 minutes fluentd
2025-05-25 04:18:36.556997 | orchestrator | de9727cf5dbc registry.osism.tech/osism/openstackclient:2024.2 "/usr/bin/dumb-init …" 30 minutes ago Up 29 minutes openstackclient
2025-05-25 04:18:36.557005 | orchestrator | e5ff847b80de registry.osism.tech/osism/homer:v25.05.2 "/bin/sh /entrypoint…" 30 minutes ago Up 29 minutes (healthy) 8080/tcp homer
2025-05-25 04:18:36.557012 | orchestrator | 2995d445b3bc registry.osism.tech/dockerhub/ubuntu/squid:6.1-23.10_beta "entrypoint.sh -f /e…" 49 minutes ago Up 48 minutes (healthy) 192.168.16.5:3128->3128/tcp squid
2025-05-25 04:18:36.557020 | orchestrator | 9752acbbf989 registry.osism.tech/osism/inventory-reconciler:latest "/sbin/tini -- /entr…" 52 minutes ago Up 51 minutes (healthy) manager-inventory_reconciler-1
2025-05-25 04:18:36.557027 | orchestrator | 7b618b1fb1cf registry.osism.tech/osism/osism-kubernetes:latest "/entrypoint.sh osis…" 52 minutes ago Up 52 minutes (healthy) osism-kubernetes
2025-05-25 04:18:36.557052 | orchestrator | 7cb09a9762ca registry.osism.tech/osism/kolla-ansible:2024.2 "/entrypoint.sh osis…" 52 minutes ago Up 52 minutes (healthy) kolla-ansible
2025-05-25 04:18:36.557065 | orchestrator | 2eaae25c9cc6 registry.osism.tech/osism/osism-ansible:latest "/entrypoint.sh osis…" 52 minutes ago Up 52 minutes (healthy) osism-ansible
2025-05-25 04:18:36.557072 | orchestrator | 1de159a4ac8f registry.osism.tech/osism/ceph-ansible:reef "/entrypoint.sh osis…" 52 minutes ago Up 52 minutes (healthy) ceph-ansible
2025-05-25 04:18:36.557080 | orchestrator | 113be76ff4f7 registry.osism.tech/osism/ara-server:1.7.2 "sh -c '/wait && /ru…" 52 minutes ago Up 52 minutes (healthy) 8000/tcp manager-ara-server-1
2025-05-25 04:18:36.557087 | orchestrator | 3c1f13c8dc79 registry.osism.tech/osism/osism:latest "/sbin/tini -- osism…" 52 minutes ago Up 52 minutes (healthy) 192.168.16.5:8000->8000/tcp manager-api-1
2025-05-25 04:18:36.557094 | orchestrator | 7747419ef7e4 registry.osism.tech/osism/osism:latest "/sbin/tini -- osism…" 52 minutes ago Up 52 minutes (healthy) manager-conductor-1
2025-05-25 04:18:36.557102 | orchestrator | 7465f9259b47 registry.osism.tech/osism/osism:latest "/sbin/tini -- osism…" 52 minutes ago Up 52 minutes (healthy) manager-netbox-1
2025-05-25 04:18:36.557109 | orchestrator | d7848a738a84 registry.osism.tech/dockerhub/library/redis:7.4.3-alpine "docker-entrypoint.s…" 52 minutes ago Up 52 minutes (healthy) 6379/tcp manager-redis-1
2025-05-25 04:18:36.557116 | orchestrator | 6df1efc564d9 registry.osism.tech/osism/osism:latest "/sbin/tini -- osism…" 52 minutes ago Up 52 minutes (healthy) manager-listener-1
2025-05-25 04:18:36.557130 | orchestrator | d33074545702 registry.osism.tech/dockerhub/library/mariadb:11.7.2 "docker-entrypoint.s…" 52 minutes ago Up 52 minutes (healthy) 3306/tcp manager-mariadb-1
2025-05-25 04:18:36.557137 | orchestrator | ed5a235edf47 registry.osism.tech/osism/osism:latest "/sbin/tini -- osism…" 52 minutes ago Up 52 minutes (healthy) manager-flower-1
2025-05-25 04:18:36.557145 | orchestrator | 06e3163d1ff3 registry.osism.tech/osism/osism:latest "/sbin/tini -- osism…" 52 minutes ago Up 52 minutes (healthy) manager-openstack-1
2025-05-25 04:18:36.557152 | orchestrator | 8e3b5b13823b registry.osism.tech/osism/osism:latest "/sbin/tini -- osism…" 52 minutes ago Up 52 minutes (healthy) manager-beat-1
2025-05-25 04:18:36.557159 | orchestrator | 40bcf5486505 registry.osism.tech/osism/osism:latest "/sbin/tini -- sleep…" 52 minutes ago Up 52 minutes (healthy) osismclient
2025-05-25 04:18:36.557167 | orchestrator | f6c7cc82a118 registry.osism.tech/osism/osism:latest "/sbin/tini -- osism…" 52 minutes ago Up 52 minutes (healthy) manager-watchdog-1
2025-05-25 04:18:36.557177 | orchestrator | 260752e51b03 registry.osism.tech/osism/netbox:v4.2.2 "/opt/netbox/venv/bi…" 59 minutes ago Up 54 minutes (healthy) netbox-netbox-worker-1
2025-05-25 04:18:36.557185 | orchestrator | 7517ccd0a4aa registry.osism.tech/osism/netbox:v4.2.2 "/usr/bin/tini -- /o…" 59 minutes ago Up 58 minutes (healthy) netbox-netbox-1
2025-05-25 04:18:36.557199 | orchestrator | b02761f769e6 registry.osism.tech/dockerhub/library/postgres:16.9-alpine "docker-entrypoint.s…" 59 minutes ago Up 58 minutes (healthy) 5432/tcp netbox-postgres-1
2025-05-25 04:18:36.557206 | orchestrator | 1eaf45b8d972 registry.osism.tech/dockerhub/library/redis:7.4.3-alpine "docker-entrypoint.s…" 59 minutes ago Up 58 minutes (healthy) 6379/tcp netbox-redis-1
2025-05-25 04:18:36.557214 | orchestrator | 68bf036ed494 registry.osism.tech/dockerhub/library/traefik:v3.4.0 "/entrypoint.sh trae…" 59 minutes ago Up 59 minutes (healthy) 192.168.16.5:80->80/tcp, 192.168.16.5:443->443/tcp, 192.168.16.5:8122->8080/tcp traefik
2025-05-25 04:18:36.783608 | orchestrator |
2025-05-25 04:18:36.783700 | orchestrator | ## Images @ testbed-manager
2025-05-25 04:18:36.783711 | orchestrator |
2025-05-25 04:18:36.783718 | orchestrator | + echo
2025-05-25 04:18:36.783726 | orchestrator | + echo '## Images @ testbed-manager'
2025-05-25 04:18:36.783733 | orchestrator | + echo
2025-05-25 04:18:36.783741 | orchestrator | + osism container testbed-manager images
2025-05-25 04:18:38.809760 | orchestrator | REPOSITORY TAG IMAGE ID CREATED SIZE
2025-05-25 04:18:38.809875 | orchestrator | registry.osism.tech/osism/homer v25.05.2 858367ca4ec4 50 minutes ago 11MB
2025-05-25 04:18:38.809890 | orchestrator | registry.osism.tech/osism/openstackclient 2024.2 de62687f33ed 51 minutes ago 225MB
2025-05-25 04:18:38.809901 | orchestrator | registry.osism.tech/osism/cephclient reef 5eabd2a766cb 53 minutes ago 453MB
2025-05-25 04:18:38.809938 | orchestrator | registry.osism.tech/kolla/cron 2024.2 58971d7378a2 3 hours ago 318MB
2025-05-25 04:18:38.809950 | orchestrator | registry.osism.tech/kolla/kolla-toolbox 2024.2 93e38a66c98d 3 hours ago 746MB
2025-05-25 04:18:38.809961 | orchestrator | registry.osism.tech/kolla/fluentd 2024.2 806bf28ba938 3 hours ago 628MB
2025-05-25 04:18:38.809972 | orchestrator | registry.osism.tech/kolla/prometheus-v2-server 2024.2 8cb002218d0f 3 hours ago 891MB
2025-05-25 04:18:38.809982 | orchestrator | registry.osism.tech/kolla/prometheus-cadvisor 2024.2 896e109b6430 3 hours ago 410MB
2025-05-25 04:18:38.809993 | orchestrator | registry.osism.tech/kolla/prometheus-node-exporter 2024.2 2de930d5879e 3 hours ago 358MB
2025-05-25 04:18:38.810006 | orchestrator | registry.osism.tech/kolla/prometheus-alertmanager 2024.2 416d83578191 3 hours ago 456MB
2025-05-25 04:18:38.810056 | orchestrator | registry.osism.tech/kolla/prometheus-blackbox-exporter 2024.2 5972698a0681 3 hours ago 360MB
2025-05-25 04:18:38.810070 | orchestrator | registry.osism.tech/osism/osism-ansible latest 1ab2239f4f60 4 hours ago 576MB
2025-05-25 04:18:38.810081 | orchestrator | registry.osism.tech/osism/ceph-ansible reef 8a7318ecd640 4 hours ago 537MB
2025-05-25 04:18:38.810092 | orchestrator | registry.osism.tech/osism/kolla-ansible 2024.2 eaf659445abc 4 hours ago 573MB
2025-05-25 04:18:38.810103 | orchestrator | registry.osism.tech/osism/osism-kubernetes latest e14e8dfa54ec 4 hours ago 1.2GB
2025-05-25 04:18:38.810114 | orchestrator | registry.osism.tech/osism/osism latest 342bab88481f 4 hours ago 295MB
2025-05-25 04:18:38.810124 | orchestrator | registry.osism.tech/osism/inventory-reconciler latest ce0a529e3fc2 4 hours ago 306MB
2025-05-25 04:18:38.810135 | orchestrator | registry.osism.tech/dockerhub/library/postgres 16.9-alpine b56133b65cd3 2 weeks ago 275MB
2025-05-25 04:18:38.810146 | orchestrator | registry.osism.tech/dockerhub/library/traefik v3.4.0 79e66182ffbe 2 weeks ago 224MB
2025-05-25 04:18:38.810156 | orchestrator | registry.osism.tech/dockerhub/hashicorp/vault 1.19.3 272792d172e0 3 weeks ago 504MB
2025-05-25 04:18:38.810168 | orchestrator | registry.osism.tech/dockerhub/library/redis 7.4.3-alpine 9a07b03a1871 4 weeks ago 41.4MB
2025-05-25 04:18:38.810179 | orchestrator | registry.osism.tech/osism/netbox v4.2.2 de0f89b61971 7 weeks ago 817MB
2025-05-25 04:18:38.810204 | orchestrator | registry.osism.tech/dockerhub/library/mariadb 11.7.2 4815a3e162ea 3 months ago 328MB
2025-05-25 04:18:38.810215 | orchestrator | phpmyadmin/phpmyadmin 5.2 0276a66ce322 4 months ago 571MB 2025-05-25 04:18:38.810226 | orchestrator | registry.osism.tech/osism/ara-server 1.7.2 bb44122eb176 8 months ago 300MB 2025-05-25 04:18:38.810240 | orchestrator | registry.osism.tech/dockerhub/ubuntu/squid 6.1-23.10_beta 34b6bbbcf74b 11 months ago 146MB 2025-05-25 04:18:39.049669 | orchestrator | + for node in testbed-manager testbed-node-0 testbed-node-1 testbed-node-2 2025-05-25 04:18:39.050669 | orchestrator | ++ semver latest 5.0.0 2025-05-25 04:18:39.093982 | orchestrator | 2025-05-25 04:18:39.094113 | orchestrator | ## Containers @ testbed-node-0 2025-05-25 04:18:39.094128 | orchestrator | 2025-05-25 04:18:39.094139 | orchestrator | + [[ -1 -eq -1 ]] 2025-05-25 04:18:39.094150 | orchestrator | + [[ latest != \l\a\t\e\s\t ]] 2025-05-25 04:18:39.094161 | orchestrator | + echo 2025-05-25 04:18:39.094172 | orchestrator | + echo '## Containers @ testbed-node-0' 2025-05-25 04:18:39.094184 | orchestrator | + echo 2025-05-25 04:18:39.094206 | orchestrator | + osism container testbed-node-0 ps 2025-05-25 04:18:41.207279 | orchestrator | CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES 2025-05-25 04:18:41.207385 | orchestrator | de2454b277ee registry.osism.tech/kolla/octavia-worker:2024.2 "dumb-init --single-…" 4 minutes ago Up 4 minutes (healthy) octavia_worker 2025-05-25 04:18:41.207412 | orchestrator | 665c2d43c5fa registry.osism.tech/kolla/octavia-housekeeping:2024.2 "dumb-init --single-…" 4 minutes ago Up 4 minutes (healthy) octavia_housekeeping 2025-05-25 04:18:41.207419 | orchestrator | 1b90d114e115 registry.osism.tech/kolla/octavia-health-manager:2024.2 "dumb-init --single-…" 4 minutes ago Up 4 minutes (healthy) octavia_health_manager 2025-05-25 04:18:41.207425 | orchestrator | 8be191d9c1eb registry.osism.tech/kolla/octavia-driver-agent:2024.2 "dumb-init --single-…" 4 minutes ago Up 4 minutes octavia_driver_agent 2025-05-25 04:18:41.207432 | orchestrator | 
5e0bbd3cf583 registry.osism.tech/kolla/octavia-api:2024.2 "dumb-init --single-…" 4 minutes ago Up 4 minutes (healthy) octavia_api 2025-05-25 04:18:41.207438 | orchestrator | 3850960784d5 registry.osism.tech/kolla/magnum-conductor:2024.2 "dumb-init --single-…" 7 minutes ago Up 6 minutes (healthy) magnum_conductor 2025-05-25 04:18:41.207443 | orchestrator | afc25a0a76a4 registry.osism.tech/kolla/magnum-api:2024.2 "dumb-init --single-…" 7 minutes ago Up 7 minutes (healthy) magnum_api 2025-05-25 04:18:41.207449 | orchestrator | 46cac9f7d29f registry.osism.tech/kolla/grafana:2024.2 "dumb-init --single-…" 7 minutes ago Up 7 minutes grafana 2025-05-25 04:18:41.207455 | orchestrator | 82842ef8a0b9 registry.osism.tech/kolla/designate-worker:2024.2 "dumb-init --single-…" 9 minutes ago Up 8 minutes (healthy) designate_worker 2025-05-25 04:18:41.207460 | orchestrator | ee1c5471d781 registry.osism.tech/kolla/placement-api:2024.2 "dumb-init --single-…" 9 minutes ago Up 8 minutes (healthy) placement_api 2025-05-25 04:18:41.207466 | orchestrator | 9f227413d8c2 registry.osism.tech/kolla/designate-mdns:2024.2 "dumb-init --single-…" 9 minutes ago Up 9 minutes (healthy) designate_mdns 2025-05-25 04:18:41.207471 | orchestrator | 913f6fafece0 registry.osism.tech/kolla/designate-producer:2024.2 "dumb-init --single-…" 9 minutes ago Up 9 minutes (healthy) designate_producer 2025-05-25 04:18:41.207477 | orchestrator | 79d8bda40444 registry.osism.tech/kolla/designate-central:2024.2 "dumb-init --single-…" 9 minutes ago Up 9 minutes (healthy) designate_central 2025-05-25 04:18:41.207482 | orchestrator | de648c4094f2 registry.osism.tech/kolla/designate-api:2024.2 "dumb-init --single-…" 9 minutes ago Up 9 minutes (healthy) designate_api 2025-05-25 04:18:41.207500 | orchestrator | e4e995168874 registry.osism.tech/kolla/neutron-server:2024.2 "dumb-init --single-…" 9 minutes ago Up 9 minutes (healthy) neutron_server 2025-05-25 04:18:41.207505 | orchestrator | 241e2248a96d 
registry.osism.tech/kolla/nova-novncproxy:2024.2 "dumb-init --single-…" 9 minutes ago Up 9 minutes (healthy) nova_novncproxy 2025-05-25 04:18:41.207511 | orchestrator | d546a7111e56 registry.osism.tech/kolla/designate-backend-bind9:2024.2 "dumb-init --single-…" 9 minutes ago Up 9 minutes (healthy) designate_backend_bind9 2025-05-25 04:18:41.207516 | orchestrator | 303ec5115089 registry.osism.tech/kolla/nova-conductor:2024.2 "dumb-init --single-…" 9 minutes ago Up 8 minutes (healthy) nova_conductor 2025-05-25 04:18:41.207525 | orchestrator | 3b93057ddb90 registry.osism.tech/kolla/barbican-worker:2024.2 "dumb-init --single-…" 11 minutes ago Up 11 minutes (healthy) barbican_worker 2025-05-25 04:18:41.207531 | orchestrator | 7564036e2fb9 registry.osism.tech/kolla/barbican-keystone-listener:2024.2 "dumb-init --single-…" 11 minutes ago Up 11 minutes (healthy) barbican_keystone_listener 2025-05-25 04:18:41.207541 | orchestrator | b7e653e5ce24 registry.osism.tech/kolla/barbican-api:2024.2 "dumb-init --single-…" 11 minutes ago Up 11 minutes (healthy) barbican_api 2025-05-25 04:18:41.207560 | orchestrator | 9c39d8851a3a registry.osism.tech/kolla/nova-api:2024.2 "dumb-init --single-…" 11 minutes ago Up 11 minutes (healthy) nova_api 2025-05-25 04:18:41.207566 | orchestrator | 727ff7ceb8c7 registry.osism.tech/kolla/nova-scheduler:2024.2 "dumb-init --single-…" 11 minutes ago Up 8 minutes (healthy) nova_scheduler 2025-05-25 04:18:41.207571 | orchestrator | da947f2e03e9 registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2 "dumb-init --single-…" 13 minutes ago Up 13 minutes prometheus_elasticsearch_exporter 2025-05-25 04:18:41.207578 | orchestrator | e85fcef59ce0 registry.osism.tech/kolla/glance-api:2024.2 "dumb-init --single-…" 13 minutes ago Up 13 minutes (healthy) glance_api 2025-05-25 04:18:41.207583 | orchestrator | bda95915fdac registry.osism.tech/kolla/prometheus-cadvisor:2024.2 "dumb-init --single-…" 13 minutes ago Up 13 minutes prometheus_cadvisor 2025-05-25 
04:18:41.207589 | orchestrator | 340e3205fd08 registry.osism.tech/kolla/cinder-scheduler:2024.2 "dumb-init --single-…" 13 minutes ago Up 13 minutes (healthy) cinder_scheduler 2025-05-25 04:18:41.207594 | orchestrator | d101906e1c84 registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2 "dumb-init --single-…" 13 minutes ago Up 13 minutes prometheus_memcached_exporter 2025-05-25 04:18:41.207600 | orchestrator | b19084cf525c registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2 "dumb-init --single-…" 13 minutes ago Up 13 minutes prometheus_mysqld_exporter 2025-05-25 04:18:41.207612 | orchestrator | b14e64c79772 registry.osism.tech/kolla/cinder-api:2024.2 "dumb-init --single-…" 14 minutes ago Up 14 minutes (healthy) cinder_api 2025-05-25 04:18:41.207618 | orchestrator | fd0404d83f34 registry.osism.tech/kolla/prometheus-node-exporter:2024.2 "dumb-init --single-…" 14 minutes ago Up 14 minutes prometheus_node_exporter 2025-05-25 04:18:41.207623 | orchestrator | b8c232758946 registry.osism.tech/osism/ceph-daemon:reef "/usr/bin/ceph-mgr -…" 15 minutes ago Up 15 minutes ceph-mgr-testbed-node-0 2025-05-25 04:18:41.207629 | orchestrator | 79b35e6fbf08 registry.osism.tech/kolla/keystone:2024.2 "dumb-init --single-…" 16 minutes ago Up 16 minutes (healthy) keystone 2025-05-25 04:18:41.207634 | orchestrator | d2dbd6a95ece registry.osism.tech/kolla/keystone-fernet:2024.2 "dumb-init --single-…" 16 minutes ago Up 16 minutes (healthy) keystone_fernet 2025-05-25 04:18:41.207640 | orchestrator | 4636e6b4ffbf registry.osism.tech/kolla/keystone-ssh:2024.2 "dumb-init --single-…" 17 minutes ago Up 17 minutes (healthy) keystone_ssh 2025-05-25 04:18:41.207645 | orchestrator | 09009cd6e0d9 registry.osism.tech/kolla/horizon:2024.2 "dumb-init --single-…" 17 minutes ago Up 17 minutes (healthy) horizon 2025-05-25 04:18:41.207651 | orchestrator | f2f3f33163ab registry.osism.tech/kolla/mariadb-server:2024.2 "dumb-init -- kolla_…" 19 minutes ago Up 18 minutes (healthy) mariadb 
2025-05-25 04:18:41.207656 | orchestrator | a602e71ad748 registry.osism.tech/kolla/opensearch-dashboards:2024.2 "dumb-init --single-…" 20 minutes ago Up 20 minutes (healthy) opensearch_dashboards 2025-05-25 04:18:41.207665 | orchestrator | 9fcbf182bd50 registry.osism.tech/kolla/opensearch:2024.2 "dumb-init --single-…" 21 minutes ago Up 21 minutes (healthy) opensearch 2025-05-25 04:18:41.207671 | orchestrator | 8808ab45df9a registry.osism.tech/osism/ceph-daemon:reef "/usr/bin/ceph-crash" 21 minutes ago Up 21 minutes ceph-crash-testbed-node-0 2025-05-25 04:18:41.207680 | orchestrator | e47c1c128cf4 registry.osism.tech/kolla/keepalived:2024.2 "dumb-init --single-…" 22 minutes ago Up 22 minutes keepalived 2025-05-25 04:18:41.207686 | orchestrator | e3e96f580138 registry.osism.tech/kolla/proxysql:2024.2 "dumb-init --single-…" 22 minutes ago Up 22 minutes (healthy) proxysql 2025-05-25 04:18:41.207691 | orchestrator | 2d7eed75c249 registry.osism.tech/kolla/haproxy:2024.2 "dumb-init --single-…" 22 minutes ago Up 22 minutes (healthy) haproxy 2025-05-25 04:18:41.207697 | orchestrator | 7d43ae44f5dd registry.osism.tech/kolla/ovn-northd:2024.2 "dumb-init --single-…" 25 minutes ago Up 24 minutes ovn_northd 2025-05-25 04:18:41.207711 | orchestrator | 6ce517d47e8e registry.osism.tech/kolla/ovn-sb-db-server:2024.2 "dumb-init --single-…" 25 minutes ago Up 25 minutes ovn_sb_db 2025-05-25 04:18:41.207717 | orchestrator | fca71a1136ed registry.osism.tech/kolla/ovn-nb-db-server:2024.2 "dumb-init --single-…" 25 minutes ago Up 25 minutes ovn_nb_db 2025-05-25 04:18:41.207722 | orchestrator | 836f1ac218e2 registry.osism.tech/kolla/ovn-controller:2024.2 "dumb-init --single-…" 26 minutes ago Up 26 minutes ovn_controller 2025-05-25 04:18:41.207728 | orchestrator | f9e65bfc28a6 registry.osism.tech/osism/ceph-daemon:reef "/usr/bin/ceph-mon -…" 26 minutes ago Up 26 minutes ceph-mon-testbed-node-0 2025-05-25 04:18:41.207733 | orchestrator | 244b154706fe registry.osism.tech/kolla/rabbitmq:2024.2 
"dumb-init --single-…" 27 minutes ago Up 27 minutes (healthy) rabbitmq 2025-05-25 04:18:41.207739 | orchestrator | 9d7a129ab8d5 registry.osism.tech/kolla/openvswitch-vswitchd:2024.2 "dumb-init --single-…" 27 minutes ago Up 27 minutes (healthy) openvswitch_vswitchd 2025-05-25 04:18:41.207744 | orchestrator | f6df5532af07 registry.osism.tech/kolla/openvswitch-db-server:2024.2 "dumb-init --single-…" 27 minutes ago Up 27 minutes (healthy) openvswitch_db 2025-05-25 04:18:41.207749 | orchestrator | b3feb60b6ee8 registry.osism.tech/kolla/redis-sentinel:2024.2 "dumb-init --single-…" 27 minutes ago Up 27 minutes (healthy) redis_sentinel 2025-05-25 04:18:41.207755 | orchestrator | ce0d511b9faf registry.osism.tech/kolla/redis:2024.2 "dumb-init --single-…" 27 minutes ago Up 27 minutes (healthy) redis 2025-05-25 04:18:41.207763 | orchestrator | 1f0f5f0ba24d registry.osism.tech/kolla/memcached:2024.2 "dumb-init --single-…" 27 minutes ago Up 27 minutes (healthy) memcached 2025-05-25 04:18:41.207768 | orchestrator | f2db1764b116 registry.osism.tech/kolla/cron:2024.2 "dumb-init --single-…" 28 minutes ago Up 28 minutes cron 2025-05-25 04:18:41.207774 | orchestrator | 55f3040a1319 registry.osism.tech/kolla/kolla-toolbox:2024.2 "dumb-init --single-…" 29 minutes ago Up 29 minutes kolla_toolbox 2025-05-25 04:18:41.207779 | orchestrator | 0093e8ed09f1 registry.osism.tech/kolla/fluentd:2024.2 "dumb-init --single-…" 29 minutes ago Up 29 minutes fluentd 2025-05-25 04:18:41.427595 | orchestrator | 2025-05-25 04:18:41.427731 | orchestrator | ## Images @ testbed-node-0 2025-05-25 04:18:41.427761 | orchestrator | 2025-05-25 04:18:41.427784 | orchestrator | + echo 2025-05-25 04:18:41.427806 | orchestrator | + echo '## Images @ testbed-node-0' 2025-05-25 04:18:41.427827 | orchestrator | + echo 2025-05-25 04:18:41.427849 | orchestrator | + osism container testbed-node-0 images 2025-05-25 04:18:43.525148 | orchestrator | REPOSITORY TAG IMAGE ID CREATED SIZE 2025-05-25 04:18:43.525281 | orchestrator 
| registry.osism.tech/osism/ceph-daemon reef f2b54d0844b7 55 minutes ago 1.27GB 2025-05-25 04:18:43.525340 | orchestrator | registry.osism.tech/kolla/cron 2024.2 58971d7378a2 3 hours ago 318MB 2025-05-25 04:18:43.525353 | orchestrator | registry.osism.tech/kolla/rabbitmq 2024.2 5fbdda89147d 3 hours ago 375MB 2025-05-25 04:18:43.525363 | orchestrator | registry.osism.tech/kolla/proxysql 2024.2 75089a8989c2 3 hours ago 417MB 2025-05-25 04:18:43.525374 | orchestrator | registry.osism.tech/kolla/kolla-toolbox 2024.2 93e38a66c98d 3 hours ago 746MB 2025-05-25 04:18:43.525385 | orchestrator | registry.osism.tech/kolla/keepalived 2024.2 e4d71f3426d0 3 hours ago 329MB 2025-05-25 04:18:43.525395 | orchestrator | registry.osism.tech/kolla/memcached 2024.2 78c30b7a7702 3 hours ago 318MB 2025-05-25 04:18:43.525406 | orchestrator | registry.osism.tech/kolla/haproxy 2024.2 691eed5d7ff9 3 hours ago 326MB 2025-05-25 04:18:43.525416 | orchestrator | registry.osism.tech/kolla/grafana 2024.2 16822c7a80ac 3 hours ago 1.01GB 2025-05-25 04:18:43.525427 | orchestrator | registry.osism.tech/kolla/fluentd 2024.2 806bf28ba938 3 hours ago 628MB 2025-05-25 04:18:43.525437 | orchestrator | registry.osism.tech/kolla/opensearch-dashboards 2024.2 2977d2abdc58 3 hours ago 1.55GB 2025-05-25 04:18:43.525448 | orchestrator | registry.osism.tech/kolla/opensearch 2024.2 fd12f394bbe0 3 hours ago 1.59GB 2025-05-25 04:18:43.525458 | orchestrator | registry.osism.tech/kolla/mariadb-server 2024.2 bdef7ed8c3d9 3 hours ago 590MB 2025-05-25 04:18:43.525469 | orchestrator | registry.osism.tech/kolla/redis-sentinel 2024.2 1931f358fc73 3 hours ago 324MB 2025-05-25 04:18:43.525479 | orchestrator | registry.osism.tech/kolla/redis 2024.2 6ed2e7650002 3 hours ago 324MB 2025-05-25 04:18:43.525489 | orchestrator | registry.osism.tech/kolla/prometheus-elasticsearch-exporter 2024.2 5962221acfc6 3 hours ago 344MB 2025-05-25 04:18:43.525500 | orchestrator | registry.osism.tech/kolla/prometheus-cadvisor 2024.2 896e109b6430 3 
hours ago 410MB 2025-05-25 04:18:43.525510 | orchestrator | registry.osism.tech/kolla/prometheus-memcached-exporter 2024.2 7b3419ba113e 3 hours ago 351MB 2025-05-25 04:18:43.525521 | orchestrator | registry.osism.tech/kolla/prometheus-mysqld-exporter 2024.2 264d7fb5e064 3 hours ago 353MB 2025-05-25 04:18:43.525534 | orchestrator | registry.osism.tech/kolla/prometheus-node-exporter 2024.2 2de930d5879e 3 hours ago 358MB 2025-05-25 04:18:43.525545 | orchestrator | registry.osism.tech/kolla/horizon 2024.2 8d0828f0a352 3 hours ago 1.21GB 2025-05-25 04:18:43.525555 | orchestrator | registry.osism.tech/kolla/openvswitch-vswitchd 2024.2 f96b911491a9 3 hours ago 361MB 2025-05-25 04:18:43.525566 | orchestrator | registry.osism.tech/kolla/openvswitch-db-server 2024.2 a69b78127aab 3 hours ago 361MB 2025-05-25 04:18:43.525576 | orchestrator | registry.osism.tech/kolla/skyline-apiserver 2024.2 8b8178143ac4 3 hours ago 1.11GB 2025-05-25 04:18:43.525587 | orchestrator | registry.osism.tech/kolla/skyline-console 2024.2 6dd0fdd5c6d9 3 hours ago 1.11GB 2025-05-25 04:18:43.525597 | orchestrator | registry.osism.tech/kolla/cinder-scheduler 2024.2 5c226626146a 3 hours ago 1.41GB 2025-05-25 04:18:43.525607 | orchestrator | registry.osism.tech/kolla/cinder-api 2024.2 488efca38567 3 hours ago 1.41GB 2025-05-25 04:18:43.525618 | orchestrator | registry.osism.tech/kolla/octavia-health-manager 2024.2 31d158dcede9 3 hours ago 1.1GB 2025-05-25 04:18:43.525628 | orchestrator | registry.osism.tech/kolla/octavia-driver-agent 2024.2 87b4af4ead98 3 hours ago 1.12GB 2025-05-25 04:18:43.525638 | orchestrator | registry.osism.tech/kolla/octavia-housekeeping 2024.2 8f1b990fc4d1 3 hours ago 1.1GB 2025-05-25 04:18:43.525657 | orchestrator | registry.osism.tech/kolla/octavia-api 2024.2 74c876a3489e 3 hours ago 1.12GB 2025-05-25 04:18:43.525668 | orchestrator | registry.osism.tech/kolla/octavia-worker 2024.2 e137033109ec 3 hours ago 1.1GB 2025-05-25 04:18:43.525680 | orchestrator | 
registry.osism.tech/kolla/neutron-server 2024.2 9f345dc99314 3 hours ago 1.24GB 2025-05-25 04:18:43.525693 | orchestrator | registry.osism.tech/kolla/glance-api 2024.2 4baf7bc69b5a 3 hours ago 1.15GB 2025-05-25 04:18:43.525705 | orchestrator | registry.osism.tech/kolla/ceilometer-notification 2024.2 a21718435829 3 hours ago 1.04GB 2025-05-25 04:18:43.525718 | orchestrator | registry.osism.tech/kolla/ceilometer-central 2024.2 91572c0b507f 3 hours ago 1.04GB 2025-05-25 04:18:43.525760 | orchestrator | registry.osism.tech/kolla/nova-novncproxy 2024.2 13dc36b9e070 3 hours ago 1.42GB 2025-05-25 04:18:43.525774 | orchestrator | registry.osism.tech/kolla/nova-conductor 2024.2 e17f34b1d21e 3 hours ago 1.29GB 2025-05-25 04:18:43.525786 | orchestrator | registry.osism.tech/kolla/nova-api 2024.2 6972cea43ee1 3 hours ago 1.29GB 2025-05-25 04:18:43.525800 | orchestrator | registry.osism.tech/kolla/nova-scheduler 2024.2 2b0d6e28bf46 3 hours ago 1.29GB 2025-05-25 04:18:43.525811 | orchestrator | registry.osism.tech/kolla/designate-producer 2024.2 ae2b57ccff5c 3 hours ago 1.05GB 2025-05-25 04:18:43.525822 | orchestrator | registry.osism.tech/kolla/designate-worker 2024.2 4e438195f140 3 hours ago 1.06GB 2025-05-25 04:18:43.525832 | orchestrator | registry.osism.tech/kolla/designate-backend-bind9 2024.2 dd5d75366ebe 3 hours ago 1.06GB 2025-05-25 04:18:43.525843 | orchestrator | registry.osism.tech/kolla/designate-api 2024.2 764605d9d48a 3 hours ago 1.05GB 2025-05-25 04:18:43.525853 | orchestrator | registry.osism.tech/kolla/designate-central 2024.2 f4baf1f3f8d2 3 hours ago 1.05GB 2025-05-25 04:18:43.525864 | orchestrator | registry.osism.tech/kolla/designate-mdns 2024.2 949730d4c408 3 hours ago 1.05GB 2025-05-25 04:18:43.525875 | orchestrator | registry.osism.tech/kolla/magnum-api 2024.2 a20cd6ce313f 3 hours ago 1.19GB 2025-05-25 04:18:43.525885 | orchestrator | registry.osism.tech/kolla/magnum-conductor 2024.2 fc1a62cee97b 3 hours ago 1.31GB 2025-05-25 04:18:43.525896 | 
orchestrator | registry.osism.tech/kolla/placement-api 2024.2 89cf50af4ede 3 hours ago 1.04GB 2025-05-25 04:18:43.525915 | orchestrator | registry.osism.tech/kolla/aodh-notifier 2024.2 9f2ba83beaa9 3 hours ago 1.04GB 2025-05-25 04:18:43.525926 | orchestrator | registry.osism.tech/kolla/aodh-evaluator 2024.2 144b50821364 3 hours ago 1.04GB 2025-05-25 04:18:43.525937 | orchestrator | registry.osism.tech/kolla/aodh-listener 2024.2 e2529a93b5e4 3 hours ago 1.04GB 2025-05-25 04:18:43.525948 | orchestrator | registry.osism.tech/kolla/aodh-api 2024.2 6ddb41125bff 3 hours ago 1.04GB 2025-05-25 04:18:43.525959 | orchestrator | registry.osism.tech/kolla/barbican-keystone-listener 2024.2 d40ce0227892 3 hours ago 1.06GB 2025-05-25 04:18:43.525969 | orchestrator | registry.osism.tech/kolla/barbican-api 2024.2 a7074e4475c3 3 hours ago 1.06GB 2025-05-25 04:18:43.525980 | orchestrator | registry.osism.tech/kolla/barbican-worker 2024.2 26176e7f6d15 3 hours ago 1.06GB 2025-05-25 04:18:43.525990 | orchestrator | registry.osism.tech/kolla/keystone-fernet 2024.2 3c62ad64b7be 3 hours ago 1.11GB 2025-05-25 04:18:43.526001 | orchestrator | registry.osism.tech/kolla/keystone-ssh 2024.2 35272f0b757d 3 hours ago 1.11GB 2025-05-25 04:18:43.526012 | orchestrator | registry.osism.tech/kolla/keystone 2024.2 1859a03777cf 3 hours ago 1.13GB 2025-05-25 04:18:43.526087 | orchestrator | registry.osism.tech/kolla/ovn-controller 2024.2 3458b6779157 3 hours ago 947MB 2025-05-25 04:18:43.526099 | orchestrator | registry.osism.tech/kolla/ovn-nb-db-server 2024.2 70432c491715 3 hours ago 946MB 2025-05-25 04:18:43.526109 | orchestrator | registry.osism.tech/kolla/ovn-northd 2024.2 3f40aba304ce 3 hours ago 947MB 2025-05-25 04:18:43.526120 | orchestrator | registry.osism.tech/kolla/ovn-sb-db-server 2024.2 55d25458df10 3 hours ago 946MB 2025-05-25 04:18:43.760133 | orchestrator | + for node in testbed-manager testbed-node-0 testbed-node-1 testbed-node-2 2025-05-25 04:18:43.760229 | orchestrator | ++ semver 
latest 5.0.0 2025-05-25 04:18:43.808104 | orchestrator | 2025-05-25 04:18:43.808186 | orchestrator | ## Containers @ testbed-node-1 2025-05-25 04:18:43.808200 | orchestrator | 2025-05-25 04:18:43.808211 | orchestrator | + [[ -1 -eq -1 ]] 2025-05-25 04:18:43.808222 | orchestrator | + [[ latest != \l\a\t\e\s\t ]] 2025-05-25 04:18:43.808233 | orchestrator | + echo 2025-05-25 04:18:43.808245 | orchestrator | + echo '## Containers @ testbed-node-1' 2025-05-25 04:18:43.808256 | orchestrator | + echo 2025-05-25 04:18:43.808267 | orchestrator | + osism container testbed-node-1 ps 2025-05-25 04:18:45.989716 | orchestrator | CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES 2025-05-25 04:18:45.989830 | orchestrator | 022df6b3c53d registry.osism.tech/kolla/octavia-worker:2024.2 "dumb-init --single-…" 4 minutes ago Up 4 minutes (healthy) octavia_worker 2025-05-25 04:18:45.989846 | orchestrator | 4ca79b6474ee registry.osism.tech/kolla/octavia-housekeeping:2024.2 "dumb-init --single-…" 4 minutes ago Up 4 minutes (healthy) octavia_housekeeping 2025-05-25 04:18:45.989858 | orchestrator | 3f5d572843b6 registry.osism.tech/kolla/octavia-health-manager:2024.2 "dumb-init --single-…" 4 minutes ago Up 4 minutes (healthy) octavia_health_manager 2025-05-25 04:18:45.989869 | orchestrator | b995cbe748bf registry.osism.tech/kolla/octavia-driver-agent:2024.2 "dumb-init --single-…" 4 minutes ago Up 4 minutes octavia_driver_agent 2025-05-25 04:18:45.989880 | orchestrator | 8e3e70229bed registry.osism.tech/kolla/octavia-api:2024.2 "dumb-init --single-…" 4 minutes ago Up 4 minutes (healthy) octavia_api 2025-05-25 04:18:45.989896 | orchestrator | b6cc78ef7dbd registry.osism.tech/kolla/grafana:2024.2 "dumb-init --single-…" 6 minutes ago Up 6 minutes grafana 2025-05-25 04:18:45.989908 | orchestrator | 3626ea5554cf registry.osism.tech/kolla/magnum-conductor:2024.2 "dumb-init --single-…" 6 minutes ago Up 6 minutes (healthy) magnum_conductor 2025-05-25 04:18:45.989919 | orchestrator | 9184b720dc4d 
registry.osism.tech/kolla/magnum-api:2024.2 "dumb-init --single-…" 7 minutes ago Up 7 minutes (healthy) magnum_api 2025-05-25 04:18:45.989929 | orchestrator | a0d16e83f03b registry.osism.tech/kolla/designate-worker:2024.2 "dumb-init --single-…" 9 minutes ago Up 9 minutes (healthy) designate_worker 2025-05-25 04:18:45.989959 | orchestrator | a4cbf24be783 registry.osism.tech/kolla/placement-api:2024.2 "dumb-init --single-…" 9 minutes ago Up 9 minutes (healthy) placement_api 2025-05-25 04:18:45.989971 | orchestrator | 515d30a29691 registry.osism.tech/kolla/designate-mdns:2024.2 "dumb-init --single-…" 9 minutes ago Up 9 minutes (healthy) designate_mdns 2025-05-25 04:18:45.989982 | orchestrator | 8604d4e80c65 registry.osism.tech/kolla/designate-producer:2024.2 "dumb-init --single-…" 9 minutes ago Up 9 minutes (healthy) designate_producer 2025-05-25 04:18:45.989993 | orchestrator | f30c68d18437 registry.osism.tech/kolla/neutron-server:2024.2 "dumb-init --single-…" 9 minutes ago Up 9 minutes (healthy) neutron_server 2025-05-25 04:18:45.990081 | orchestrator | ab453ce138df registry.osism.tech/kolla/designate-central:2024.2 "dumb-init --single-…" 9 minutes ago Up 9 minutes (healthy) designate_central 2025-05-25 04:18:45.990095 | orchestrator | 6d31bcde1a27 registry.osism.tech/kolla/nova-novncproxy:2024.2 "dumb-init --single-…" 9 minutes ago Up 9 minutes (healthy) nova_novncproxy 2025-05-25 04:18:45.990106 | orchestrator | 25f3a8ed748b registry.osism.tech/kolla/designate-api:2024.2 "dumb-init --single-…" 9 minutes ago Up 9 minutes (healthy) designate_api 2025-05-25 04:18:45.990117 | orchestrator | 7db8738be44d registry.osism.tech/kolla/designate-backend-bind9:2024.2 "dumb-init --single-…" 9 minutes ago Up 9 minutes (healthy) designate_backend_bind9 2025-05-25 04:18:45.990128 | orchestrator | 2e99e8941964 registry.osism.tech/kolla/nova-conductor:2024.2 "dumb-init --single-…" 9 minutes ago Up 8 minutes (healthy) nova_conductor 2025-05-25 04:18:45.990138 | orchestrator | 
3d622276f820 registry.osism.tech/kolla/barbican-worker:2024.2 "dumb-init --single-…" 11 minutes ago Up 11 minutes (healthy) barbican_worker 2025-05-25 04:18:45.990149 | orchestrator | 0b1855420e4c registry.osism.tech/kolla/barbican-keystone-listener:2024.2 "dumb-init --single-…" 11 minutes ago Up 11 minutes (healthy) barbican_keystone_listener 2025-05-25 04:18:45.990160 | orchestrator | 0c3629545454 registry.osism.tech/kolla/barbican-api:2024.2 "dumb-init --single-…" 11 minutes ago Up 11 minutes (healthy) barbican_api 2025-05-25 04:18:45.990190 | orchestrator | 72719abc5b02 registry.osism.tech/kolla/nova-api:2024.2 "dumb-init --single-…" 11 minutes ago Up 11 minutes (healthy) nova_api 2025-05-25 04:18:45.990201 | orchestrator | f5c592c22c7a registry.osism.tech/kolla/nova-scheduler:2024.2 "dumb-init --single-…" 11 minutes ago Up 9 minutes (healthy) nova_scheduler 2025-05-25 04:18:45.990212 | orchestrator | 0bace8c4f2d3 registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2 "dumb-init --single-…" 13 minutes ago Up 13 minutes prometheus_elasticsearch_exporter 2025-05-25 04:18:45.990225 | orchestrator | 4dd3e208c474 registry.osism.tech/kolla/glance-api:2024.2 "dumb-init --single-…" 13 minutes ago Up 13 minutes (healthy) glance_api 2025-05-25 04:18:45.990236 | orchestrator | 65d5ccda5e8c registry.osism.tech/kolla/cinder-scheduler:2024.2 "dumb-init --single-…" 13 minutes ago Up 13 minutes (healthy) cinder_scheduler 2025-05-25 04:18:45.990247 | orchestrator | 97fd2c325659 registry.osism.tech/kolla/prometheus-cadvisor:2024.2 "dumb-init --single-…" 13 minutes ago Up 13 minutes prometheus_cadvisor 2025-05-25 04:18:45.990260 | orchestrator | 4f7c65d92b93 registry.osism.tech/kolla/cinder-api:2024.2 "dumb-init --single-…" 13 minutes ago Up 13 minutes (healthy) cinder_api 2025-05-25 04:18:45.990272 | orchestrator | 56a4e9869101 registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2 "dumb-init --single-…" 13 minutes ago Up 13 minutes 
prometheus_memcached_exporter 2025-05-25 04:18:45.990284 | orchestrator | 92084375c534 registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2 "dumb-init --single-…" 14 minutes ago Up 14 minutes prometheus_mysqld_exporter 2025-05-25 04:18:45.990321 | orchestrator | c68951e7d074 registry.osism.tech/kolla/prometheus-node-exporter:2024.2 "dumb-init --single-…" 14 minutes ago Up 14 minutes prometheus_node_exporter 2025-05-25 04:18:45.990334 | orchestrator | a065b66ee500 registry.osism.tech/osism/ceph-daemon:reef "/usr/bin/ceph-mgr -…" 15 minutes ago Up 15 minutes ceph-mgr-testbed-node-1 2025-05-25 04:18:45.990353 | orchestrator | b2e327b47f41 registry.osism.tech/kolla/keystone:2024.2 "dumb-init --single-…" 16 minutes ago Up 16 minutes (healthy) keystone 2025-05-25 04:18:45.990375 | orchestrator | f4ca4dbdd634 registry.osism.tech/kolla/horizon:2024.2 "dumb-init --single-…" 16 minutes ago Up 16 minutes (healthy) horizon 2025-05-25 04:18:45.990388 | orchestrator | 893c9445f94f registry.osism.tech/kolla/keystone-fernet:2024.2 "dumb-init --single-…" 16 minutes ago Up 16 minutes (healthy) keystone_fernet 2025-05-25 04:18:45.990400 | orchestrator | 8b6310782129 registry.osism.tech/kolla/keystone-ssh:2024.2 "dumb-init --single-…" 17 minutes ago Up 17 minutes (healthy) keystone_ssh 2025-05-25 04:18:45.990413 | orchestrator | 4cb2749dc307 registry.osism.tech/kolla/opensearch-dashboards:2024.2 "dumb-init --single-…" 19 minutes ago Up 18 minutes (healthy) opensearch_dashboards 2025-05-25 04:18:45.990425 | orchestrator | c20e5dd187b3 registry.osism.tech/kolla/mariadb-server:2024.2 "dumb-init -- kolla_…" 20 minutes ago Up 20 minutes (healthy) mariadb 2025-05-25 04:18:45.990438 | orchestrator | 2e8ac26d8f16 registry.osism.tech/kolla/opensearch:2024.2 "dumb-init --single-…" 20 minutes ago Up 20 minutes (healthy) opensearch 2025-05-25 04:18:45.990451 | orchestrator | 534cf343c9d0 registry.osism.tech/osism/ceph-daemon:reef "/usr/bin/ceph-crash" 21 minutes ago Up 21 minutes 
ceph-crash-testbed-node-1 2025-05-25 04:18:45.990463 | orchestrator | 22f05b748859 registry.osism.tech/kolla/keepalived:2024.2 "dumb-init --single-…" 22 minutes ago Up 22 minutes keepalived 2025-05-25 04:18:45.990476 | orchestrator | b9d1466b75e7 registry.osism.tech/kolla/proxysql:2024.2 "dumb-init --single-…" 22 minutes ago Up 22 minutes (healthy) proxysql 2025-05-25 04:18:45.990488 | orchestrator | c3d908d8518e registry.osism.tech/kolla/haproxy:2024.2 "dumb-init --single-…" 22 minutes ago Up 22 minutes (healthy) haproxy 2025-05-25 04:18:45.990500 | orchestrator | 68f7d8445bbe registry.osism.tech/kolla/ovn-northd:2024.2 "dumb-init --single-…" 25 minutes ago Up 24 minutes ovn_northd 2025-05-25 04:18:45.990520 | orchestrator | 5cd424e218c3 registry.osism.tech/kolla/ovn-sb-db-server:2024.2 "dumb-init --single-…" 25 minutes ago Up 24 minutes ovn_sb_db 2025-05-25 04:18:45.990534 | orchestrator | 76da8db4e858 registry.osism.tech/kolla/ovn-nb-db-server:2024.2 "dumb-init --single-…" 25 minutes ago Up 24 minutes ovn_nb_db 2025-05-25 04:18:45.990546 | orchestrator | 1ee425e31841 registry.osism.tech/kolla/ovn-controller:2024.2 "dumb-init --single-…" 25 minutes ago Up 25 minutes ovn_controller 2025-05-25 04:18:45.990558 | orchestrator | 7118cb7638a8 registry.osism.tech/kolla/rabbitmq:2024.2 "dumb-init --single-…" 26 minutes ago Up 26 minutes (healthy) rabbitmq 2025-05-25 04:18:45.990571 | orchestrator | adc79498cd02 registry.osism.tech/osism/ceph-daemon:reef "/usr/bin/ceph-mon -…" 26 minutes ago Up 26 minutes ceph-mon-testbed-node-1 2025-05-25 04:18:45.990583 | orchestrator | 1e0d14e39c93 registry.osism.tech/kolla/openvswitch-vswitchd:2024.2 "dumb-init --single-…" 27 minutes ago Up 27 minutes (healthy) openvswitch_vswitchd 2025-05-25 04:18:45.990596 | orchestrator | ecd0dbe3342f registry.osism.tech/kolla/openvswitch-db-server:2024.2 "dumb-init --single-…" 27 minutes ago Up 27 minutes (healthy) openvswitch_db 2025-05-25 04:18:45.990608 | orchestrator | e930ffff7ab3 
registry.osism.tech/kolla/redis-sentinel:2024.2 "dumb-init --single-…" 27 minutes ago Up 27 minutes (healthy) redis_sentinel 2025-05-25 04:18:45.990620 | orchestrator | 3164b3cb1514 registry.osism.tech/kolla/redis:2024.2 "dumb-init --single-…" 27 minutes ago Up 27 minutes (healthy) redis 2025-05-25 04:18:45.990637 | orchestrator | 98e64be3c87b registry.osism.tech/kolla/memcached:2024.2 "dumb-init --single-…" 27 minutes ago Up 27 minutes (healthy) memcached 2025-05-25 04:18:45.990648 | orchestrator | c88caa17cbef registry.osism.tech/kolla/cron:2024.2 "dumb-init --single-…" 28 minutes ago Up 28 minutes cron 2025-05-25 04:18:45.990659 | orchestrator | b949a6bbd53a registry.osism.tech/kolla/kolla-toolbox:2024.2 "dumb-init --single-…" 28 minutes ago Up 28 minutes kolla_toolbox 2025-05-25 04:18:45.990670 | orchestrator | 80cddb928f4b registry.osism.tech/kolla/fluentd:2024.2 "dumb-init --single-…" 29 minutes ago Up 29 minutes fluentd 2025-05-25 04:18:46.226266 | orchestrator | 2025-05-25 04:18:46.226414 | orchestrator | ## Images @ testbed-node-1 2025-05-25 04:18:46.226431 | orchestrator | 2025-05-25 04:18:46.226443 | orchestrator | + echo 2025-05-25 04:18:46.226455 | orchestrator | + echo '## Images @ testbed-node-1' 2025-05-25 04:18:46.226467 | orchestrator | + echo 2025-05-25 04:18:46.226479 | orchestrator | + osism container testbed-node-1 images 2025-05-25 04:18:48.292991 | orchestrator | REPOSITORY TAG IMAGE ID CREATED SIZE 2025-05-25 04:18:48.293099 | orchestrator | registry.osism.tech/osism/ceph-daemon reef f2b54d0844b7 55 minutes ago 1.27GB 2025-05-25 04:18:48.293133 | orchestrator | registry.osism.tech/kolla/cron 2024.2 58971d7378a2 3 hours ago 318MB 2025-05-25 04:18:48.293146 | orchestrator | registry.osism.tech/kolla/rabbitmq 2024.2 5fbdda89147d 3 hours ago 375MB 2025-05-25 04:18:48.293157 | orchestrator | registry.osism.tech/kolla/proxysql 2024.2 75089a8989c2 3 hours ago 417MB 2025-05-25 04:18:48.293168 | orchestrator | registry.osism.tech/kolla/kolla-toolbox 
2024.2 93e38a66c98d 3 hours ago 746MB 2025-05-25 04:18:48.293223 | orchestrator | registry.osism.tech/kolla/keepalived 2024.2 e4d71f3426d0 3 hours ago 329MB 2025-05-25 04:18:48.293245 | orchestrator | registry.osism.tech/kolla/memcached 2024.2 78c30b7a7702 3 hours ago 318MB 2025-05-25 04:18:48.293266 | orchestrator | registry.osism.tech/kolla/haproxy 2024.2 691eed5d7ff9 3 hours ago 326MB 2025-05-25 04:18:48.293345 | orchestrator | registry.osism.tech/kolla/grafana 2024.2 16822c7a80ac 3 hours ago 1.01GB 2025-05-25 04:18:48.293362 | orchestrator | registry.osism.tech/kolla/fluentd 2024.2 806bf28ba938 3 hours ago 628MB 2025-05-25 04:18:48.293373 | orchestrator | registry.osism.tech/kolla/opensearch-dashboards 2024.2 2977d2abdc58 3 hours ago 1.55GB 2025-05-25 04:18:48.293384 | orchestrator | registry.osism.tech/kolla/opensearch 2024.2 fd12f394bbe0 3 hours ago 1.59GB 2025-05-25 04:18:48.293395 | orchestrator | registry.osism.tech/kolla/mariadb-server 2024.2 bdef7ed8c3d9 3 hours ago 590MB 2025-05-25 04:18:48.293406 | orchestrator | registry.osism.tech/kolla/redis-sentinel 2024.2 1931f358fc73 3 hours ago 324MB 2025-05-25 04:18:48.293416 | orchestrator | registry.osism.tech/kolla/redis 2024.2 6ed2e7650002 3 hours ago 324MB 2025-05-25 04:18:48.293427 | orchestrator | registry.osism.tech/kolla/prometheus-elasticsearch-exporter 2024.2 5962221acfc6 3 hours ago 344MB 2025-05-25 04:18:48.293438 | orchestrator | registry.osism.tech/kolla/prometheus-cadvisor 2024.2 896e109b6430 3 hours ago 410MB 2025-05-25 04:18:48.293449 | orchestrator | registry.osism.tech/kolla/prometheus-memcached-exporter 2024.2 7b3419ba113e 3 hours ago 351MB 2025-05-25 04:18:48.293460 | orchestrator | registry.osism.tech/kolla/prometheus-mysqld-exporter 2024.2 264d7fb5e064 3 hours ago 353MB 2025-05-25 04:18:48.293470 | orchestrator | registry.osism.tech/kolla/prometheus-node-exporter 2024.2 2de930d5879e 3 hours ago 358MB 2025-05-25 04:18:48.293499 | orchestrator | registry.osism.tech/kolla/horizon 2024.2 
8d0828f0a352 3 hours ago 1.21GB 2025-05-25 04:18:48.293510 | orchestrator | registry.osism.tech/kolla/openvswitch-vswitchd 2024.2 f96b911491a9 3 hours ago 361MB 2025-05-25 04:18:48.293523 | orchestrator | registry.osism.tech/kolla/openvswitch-db-server 2024.2 a69b78127aab 3 hours ago 361MB 2025-05-25 04:18:48.293536 | orchestrator | registry.osism.tech/kolla/cinder-scheduler 2024.2 5c226626146a 3 hours ago 1.41GB 2025-05-25 04:18:48.293548 | orchestrator | registry.osism.tech/kolla/cinder-api 2024.2 488efca38567 3 hours ago 1.41GB 2025-05-25 04:18:48.293561 | orchestrator | registry.osism.tech/kolla/octavia-health-manager 2024.2 31d158dcede9 3 hours ago 1.1GB 2025-05-25 04:18:48.293573 | orchestrator | registry.osism.tech/kolla/octavia-driver-agent 2024.2 87b4af4ead98 3 hours ago 1.12GB 2025-05-25 04:18:48.293585 | orchestrator | registry.osism.tech/kolla/octavia-housekeeping 2024.2 8f1b990fc4d1 3 hours ago 1.1GB 2025-05-25 04:18:48.293597 | orchestrator | registry.osism.tech/kolla/octavia-api 2024.2 74c876a3489e 3 hours ago 1.12GB 2025-05-25 04:18:48.293610 | orchestrator | registry.osism.tech/kolla/octavia-worker 2024.2 e137033109ec 3 hours ago 1.1GB 2025-05-25 04:18:48.293623 | orchestrator | registry.osism.tech/kolla/neutron-server 2024.2 9f345dc99314 3 hours ago 1.24GB 2025-05-25 04:18:48.293636 | orchestrator | registry.osism.tech/kolla/glance-api 2024.2 4baf7bc69b5a 3 hours ago 1.15GB 2025-05-25 04:18:48.293648 | orchestrator | registry.osism.tech/kolla/nova-novncproxy 2024.2 13dc36b9e070 3 hours ago 1.42GB 2025-05-25 04:18:48.293661 | orchestrator | registry.osism.tech/kolla/nova-conductor 2024.2 e17f34b1d21e 3 hours ago 1.29GB 2025-05-25 04:18:48.293673 | orchestrator | registry.osism.tech/kolla/nova-api 2024.2 6972cea43ee1 3 hours ago 1.29GB 2025-05-25 04:18:48.293686 | orchestrator | registry.osism.tech/kolla/nova-scheduler 2024.2 2b0d6e28bf46 3 hours ago 1.29GB 2025-05-25 04:18:48.293719 | orchestrator | registry.osism.tech/kolla/designate-producer 
2024.2 ae2b57ccff5c 3 hours ago 1.05GB 2025-05-25 04:18:48.293733 | orchestrator | registry.osism.tech/kolla/designate-worker 2024.2 4e438195f140 3 hours ago 1.06GB 2025-05-25 04:18:48.293745 | orchestrator | registry.osism.tech/kolla/designate-backend-bind9 2024.2 dd5d75366ebe 3 hours ago 1.06GB 2025-05-25 04:18:48.293758 | orchestrator | registry.osism.tech/kolla/designate-api 2024.2 764605d9d48a 3 hours ago 1.05GB 2025-05-25 04:18:48.293770 | orchestrator | registry.osism.tech/kolla/designate-central 2024.2 f4baf1f3f8d2 3 hours ago 1.05GB 2025-05-25 04:18:48.293782 | orchestrator | registry.osism.tech/kolla/designate-mdns 2024.2 949730d4c408 3 hours ago 1.05GB 2025-05-25 04:18:48.293794 | orchestrator | registry.osism.tech/kolla/magnum-api 2024.2 a20cd6ce313f 3 hours ago 1.19GB 2025-05-25 04:18:48.293807 | orchestrator | registry.osism.tech/kolla/magnum-conductor 2024.2 fc1a62cee97b 3 hours ago 1.31GB 2025-05-25 04:18:48.293819 | orchestrator | registry.osism.tech/kolla/placement-api 2024.2 89cf50af4ede 3 hours ago 1.04GB 2025-05-25 04:18:48.293831 | orchestrator | registry.osism.tech/kolla/barbican-keystone-listener 2024.2 d40ce0227892 3 hours ago 1.06GB 2025-05-25 04:18:48.293844 | orchestrator | registry.osism.tech/kolla/barbican-api 2024.2 a7074e4475c3 3 hours ago 1.06GB 2025-05-25 04:18:48.293856 | orchestrator | registry.osism.tech/kolla/barbican-worker 2024.2 26176e7f6d15 3 hours ago 1.06GB 2025-05-25 04:18:48.293868 | orchestrator | registry.osism.tech/kolla/keystone-fernet 2024.2 3c62ad64b7be 3 hours ago 1.11GB 2025-05-25 04:18:48.293895 | orchestrator | registry.osism.tech/kolla/keystone-ssh 2024.2 35272f0b757d 3 hours ago 1.11GB 2025-05-25 04:18:48.293907 | orchestrator | registry.osism.tech/kolla/keystone 2024.2 1859a03777cf 3 hours ago 1.13GB 2025-05-25 04:18:48.293918 | orchestrator | registry.osism.tech/kolla/ovn-controller 2024.2 3458b6779157 3 hours ago 947MB 2025-05-25 04:18:48.293929 | orchestrator | registry.osism.tech/kolla/ovn-nb-db-server 
2024.2 70432c491715 3 hours ago 946MB 2025-05-25 04:18:48.293940 | orchestrator | registry.osism.tech/kolla/ovn-northd 2024.2 3f40aba304ce 3 hours ago 947MB 2025-05-25 04:18:48.293951 | orchestrator | registry.osism.tech/kolla/ovn-sb-db-server 2024.2 55d25458df10 3 hours ago 946MB 2025-05-25 04:18:48.533269 | orchestrator | + for node in testbed-manager testbed-node-0 testbed-node-1 testbed-node-2 2025-05-25 04:18:48.533450 | orchestrator | ++ semver latest 5.0.0 2025-05-25 04:18:48.591022 | orchestrator | 2025-05-25 04:18:48.591127 | orchestrator | ## Containers @ testbed-node-2 2025-05-25 04:18:48.591144 | orchestrator | 2025-05-25 04:18:48.591156 | orchestrator | + [[ -1 -eq -1 ]] 2025-05-25 04:18:48.591167 | orchestrator | + [[ latest != \l\a\t\e\s\t ]] 2025-05-25 04:18:48.591178 | orchestrator | + echo 2025-05-25 04:18:48.591190 | orchestrator | + echo '## Containers @ testbed-node-2' 2025-05-25 04:18:48.591201 | orchestrator | + echo 2025-05-25 04:18:48.591212 | orchestrator | + osism container testbed-node-2 ps 2025-05-25 04:18:50.678726 | orchestrator | CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES 2025-05-25 04:18:50.678818 | orchestrator | b29947f700ae registry.osism.tech/kolla/octavia-worker:2024.2 "dumb-init --single-…" 4 minutes ago Up 4 minutes (healthy) octavia_worker 2025-05-25 04:18:50.678831 | orchestrator | a1827e2f525a registry.osism.tech/kolla/octavia-housekeeping:2024.2 "dumb-init --single-…" 4 minutes ago Up 4 minutes (healthy) octavia_housekeeping 2025-05-25 04:18:50.678841 | orchestrator | eca4d64ccbbb registry.osism.tech/kolla/octavia-health-manager:2024.2 "dumb-init --single-…" 4 minutes ago Up 4 minutes (healthy) octavia_health_manager 2025-05-25 04:18:50.678849 | orchestrator | 7c4b15cfffae registry.osism.tech/kolla/octavia-driver-agent:2024.2 "dumb-init --single-…" 4 minutes ago Up 4 minutes octavia_driver_agent 2025-05-25 04:18:50.678857 | orchestrator | 253a35de08ee registry.osism.tech/kolla/octavia-api:2024.2 "dumb-init 
--single-…" 4 minutes ago Up 4 minutes (healthy) octavia_api 2025-05-25 04:18:50.678865 | orchestrator | 88c306ad3ab6 registry.osism.tech/kolla/grafana:2024.2 "dumb-init --single-…" 6 minutes ago Up 6 minutes grafana 2025-05-25 04:18:50.678874 | orchestrator | ed8329862e98 registry.osism.tech/kolla/magnum-conductor:2024.2 "dumb-init --single-…" 7 minutes ago Up 7 minutes (healthy) magnum_conductor 2025-05-25 04:18:50.678881 | orchestrator | b3d7b8302ab0 registry.osism.tech/kolla/magnum-api:2024.2 "dumb-init --single-…" 7 minutes ago Up 7 minutes (healthy) magnum_api 2025-05-25 04:18:50.678889 | orchestrator | 6ec627554f92 registry.osism.tech/kolla/designate-worker:2024.2 "dumb-init --single-…" 9 minutes ago Up 9 minutes (healthy) designate_worker 2025-05-25 04:18:50.678897 | orchestrator | a4b86e016cc7 registry.osism.tech/kolla/placement-api:2024.2 "dumb-init --single-…" 9 minutes ago Up 9 minutes (healthy) placement_api 2025-05-25 04:18:50.678905 | orchestrator | 1b2fba706b8e registry.osism.tech/kolla/designate-mdns:2024.2 "dumb-init --single-…" 9 minutes ago Up 9 minutes (healthy) designate_mdns 2025-05-25 04:18:50.678913 | orchestrator | a228951fb33c registry.osism.tech/kolla/designate-producer:2024.2 "dumb-init --single-…" 9 minutes ago Up 9 minutes (healthy) designate_producer 2025-05-25 04:18:50.678942 | orchestrator | 96442626c57d registry.osism.tech/kolla/neutron-server:2024.2 "dumb-init --single-…" 9 minutes ago Up 9 minutes (healthy) neutron_server 2025-05-25 04:18:50.678951 | orchestrator | 0a6e8c66cffb registry.osism.tech/kolla/designate-central:2024.2 "dumb-init --single-…" 9 minutes ago Up 9 minutes (healthy) designate_central 2025-05-25 04:18:50.678960 | orchestrator | c5efa7715994 registry.osism.tech/kolla/nova-novncproxy:2024.2 "dumb-init --single-…" 9 minutes ago Up 9 minutes (healthy) nova_novncproxy 2025-05-25 04:18:50.678968 | orchestrator | af90541e4141 registry.osism.tech/kolla/designate-api:2024.2 "dumb-init --single-…" 9 minutes ago Up 9 
minutes (healthy) designate_api 2025-05-25 04:18:50.678976 | orchestrator | f4413bb6138c registry.osism.tech/kolla/designate-backend-bind9:2024.2 "dumb-init --single-…" 9 minutes ago Up 9 minutes (healthy) designate_backend_bind9 2025-05-25 04:18:50.678984 | orchestrator | ca5394082cc5 registry.osism.tech/kolla/nova-conductor:2024.2 "dumb-init --single-…" 10 minutes ago Up 8 minutes (healthy) nova_conductor 2025-05-25 04:18:50.678991 | orchestrator | a9126f8232bf registry.osism.tech/kolla/barbican-worker:2024.2 "dumb-init --single-…" 11 minutes ago Up 11 minutes (healthy) barbican_worker 2025-05-25 04:18:50.678999 | orchestrator | 404257feabfd registry.osism.tech/kolla/barbican-keystone-listener:2024.2 "dumb-init --single-…" 11 minutes ago Up 11 minutes (healthy) barbican_keystone_listener 2025-05-25 04:18:50.679007 | orchestrator | d09f2e513dd0 registry.osism.tech/kolla/barbican-api:2024.2 "dumb-init --single-…" 11 minutes ago Up 11 minutes (healthy) barbican_api 2025-05-25 04:18:50.679030 | orchestrator | af1f6cc13d74 registry.osism.tech/kolla/nova-api:2024.2 "dumb-init --single-…" 11 minutes ago Up 11 minutes (healthy) nova_api 2025-05-25 04:18:50.679039 | orchestrator | bb9f931ff4bc registry.osism.tech/kolla/nova-scheduler:2024.2 "dumb-init --single-…" 11 minutes ago Up 9 minutes (healthy) nova_scheduler 2025-05-25 04:18:50.679048 | orchestrator | d7d5b1223248 registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2 "dumb-init --single-…" 13 minutes ago Up 13 minutes prometheus_elasticsearch_exporter 2025-05-25 04:18:50.679057 | orchestrator | 58dd49417632 registry.osism.tech/kolla/glance-api:2024.2 "dumb-init --single-…" 13 minutes ago Up 13 minutes (healthy) glance_api 2025-05-25 04:18:50.679065 | orchestrator | dedf531fd557 registry.osism.tech/kolla/cinder-scheduler:2024.2 "dumb-init --single-…" 13 minutes ago Up 13 minutes (healthy) cinder_scheduler 2025-05-25 04:18:50.679074 | orchestrator | 38ec0f0ad340 
registry.osism.tech/kolla/prometheus-cadvisor:2024.2 "dumb-init --single-…" 13 minutes ago Up 13 minutes prometheus_cadvisor 2025-05-25 04:18:50.679097 | orchestrator | e7b1003a5a84 registry.osism.tech/kolla/cinder-api:2024.2 "dumb-init --single-…" 13 minutes ago Up 13 minutes (healthy) cinder_api 2025-05-25 04:18:50.679106 | orchestrator | 73a82fe5c9b2 registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2 "dumb-init --single-…" 14 minutes ago Up 14 minutes prometheus_memcached_exporter 2025-05-25 04:18:50.679114 | orchestrator | 2394f399b1e1 registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2 "dumb-init --single-…" 14 minutes ago Up 14 minutes prometheus_mysqld_exporter 2025-05-25 04:18:50.679123 | orchestrator | 67ae914052ba registry.osism.tech/kolla/prometheus-node-exporter:2024.2 "dumb-init --single-…" 14 minutes ago Up 14 minutes prometheus_node_exporter 2025-05-25 04:18:50.679138 | orchestrator | 0b2d47e7c0f9 registry.osism.tech/osism/ceph-daemon:reef "/usr/bin/ceph-mgr -…" 14 minutes ago Up 14 minutes ceph-mgr-testbed-node-2 2025-05-25 04:18:50.679146 | orchestrator | 2f4261fe5d6d registry.osism.tech/kolla/keystone:2024.2 "dumb-init --single-…" 16 minutes ago Up 16 minutes (healthy) keystone 2025-05-25 04:18:50.679154 | orchestrator | 54330e1444cf registry.osism.tech/kolla/keystone-fernet:2024.2 "dumb-init --single-…" 17 minutes ago Up 17 minutes (healthy) keystone_fernet 2025-05-25 04:18:50.679166 | orchestrator | d38f5ff30efa registry.osism.tech/kolla/horizon:2024.2 "dumb-init --single-…" 17 minutes ago Up 17 minutes (healthy) horizon 2025-05-25 04:18:50.679240 | orchestrator | 75ae2556f1c5 registry.osism.tech/kolla/keystone-ssh:2024.2 "dumb-init --single-…" 17 minutes ago Up 17 minutes (healthy) keystone_ssh 2025-05-25 04:18:50.679251 | orchestrator | ba7c23d9de3b registry.osism.tech/kolla/opensearch-dashboards:2024.2 "dumb-init --single-…" 19 minutes ago Up 19 minutes (healthy) opensearch_dashboards 2025-05-25 04:18:50.679258 | 
orchestrator | 88e06419fdf1 registry.osism.tech/kolla/mariadb-server:2024.2 "dumb-init -- kolla_…" 19 minutes ago Up 19 minutes (healthy) mariadb 2025-05-25 04:18:50.679267 | orchestrator | ebf08b861103 registry.osism.tech/kolla/opensearch:2024.2 "dumb-init --single-…" 20 minutes ago Up 20 minutes (healthy) opensearch 2025-05-25 04:18:50.679276 | orchestrator | c648b1344f8c registry.osism.tech/osism/ceph-daemon:reef "/usr/bin/ceph-crash" 21 minutes ago Up 21 minutes ceph-crash-testbed-node-2 2025-05-25 04:18:50.679313 | orchestrator | 024d9fc438a4 registry.osism.tech/kolla/keepalived:2024.2 "dumb-init --single-…" 22 minutes ago Up 22 minutes keepalived 2025-05-25 04:18:50.679325 | orchestrator | a856fdd40199 registry.osism.tech/kolla/proxysql:2024.2 "dumb-init --single-…" 22 minutes ago Up 22 minutes (healthy) proxysql 2025-05-25 04:18:50.679334 | orchestrator | 43f64615268b registry.osism.tech/kolla/haproxy:2024.2 "dumb-init --single-…" 22 minutes ago Up 22 minutes (healthy) haproxy 2025-05-25 04:18:50.679343 | orchestrator | 7923e7db7fc3 registry.osism.tech/kolla/ovn-northd:2024.2 "dumb-init --single-…" 25 minutes ago Up 24 minutes ovn_northd 2025-05-25 04:18:50.679360 | orchestrator | f6f4c8fa2cbe registry.osism.tech/kolla/ovn-sb-db-server:2024.2 "dumb-init --single-…" 25 minutes ago Up 24 minutes ovn_sb_db 2025-05-25 04:18:50.679370 | orchestrator | 7dd3d6f06eda registry.osism.tech/kolla/ovn-nb-db-server:2024.2 "dumb-init --single-…" 25 minutes ago Up 24 minutes ovn_nb_db 2025-05-25 04:18:50.679380 | orchestrator | 66f07e484f36 registry.osism.tech/kolla/ovn-controller:2024.2 "dumb-init --single-…" 25 minutes ago Up 25 minutes ovn_controller 2025-05-25 04:18:50.679390 | orchestrator | 55028bdc3ce0 registry.osism.tech/kolla/rabbitmq:2024.2 "dumb-init --single-…" 26 minutes ago Up 26 minutes (healthy) rabbitmq 2025-05-25 04:18:50.679399 | orchestrator | 7642e6248279 registry.osism.tech/osism/ceph-daemon:reef "/usr/bin/ceph-mon -…" 26 minutes ago Up 26 minutes 
ceph-mon-testbed-node-2 2025-05-25 04:18:50.679408 | orchestrator | aa285f5e7e6e registry.osism.tech/kolla/openvswitch-vswitchd:2024.2 "dumb-init --single-…" 27 minutes ago Up 27 minutes (healthy) openvswitch_vswitchd 2025-05-25 04:18:50.679417 | orchestrator | 9dba2bb77d5b registry.osism.tech/kolla/openvswitch-db-server:2024.2 "dumb-init --single-…" 27 minutes ago Up 27 minutes (healthy) openvswitch_db 2025-05-25 04:18:50.679432 | orchestrator | fd2bdebced09 registry.osism.tech/kolla/redis-sentinel:2024.2 "dumb-init --single-…" 27 minutes ago Up 27 minutes (healthy) redis_sentinel 2025-05-25 04:18:50.679442 | orchestrator | 779494eb04e2 registry.osism.tech/kolla/redis:2024.2 "dumb-init --single-…" 27 minutes ago Up 27 minutes (healthy) redis 2025-05-25 04:18:50.679451 | orchestrator | 6a3789cb1b36 registry.osism.tech/kolla/memcached:2024.2 "dumb-init --single-…" 28 minutes ago Up 28 minutes (healthy) memcached 2025-05-25 04:18:50.679460 | orchestrator | 15c43cde0dbe registry.osism.tech/kolla/cron:2024.2 "dumb-init --single-…" 28 minutes ago Up 28 minutes cron 2025-05-25 04:18:50.679468 | orchestrator | 2cf89d2f6f66 registry.osism.tech/kolla/kolla-toolbox:2024.2 "dumb-init --single-…" 28 minutes ago Up 28 minutes kolla_toolbox 2025-05-25 04:18:50.679477 | orchestrator | c79a3b7a9c9d registry.osism.tech/kolla/fluentd:2024.2 "dumb-init --single-…" 29 minutes ago Up 29 minutes fluentd 2025-05-25 04:18:50.921208 | orchestrator | 2025-05-25 04:18:50.921358 | orchestrator | ## Images @ testbed-node-2 2025-05-25 04:18:50.921375 | orchestrator | 2025-05-25 04:18:50.921386 | orchestrator | + echo 2025-05-25 04:18:50.921398 | orchestrator | + echo '## Images @ testbed-node-2' 2025-05-25 04:18:50.921410 | orchestrator | + echo 2025-05-25 04:18:50.921421 | orchestrator | + osism container testbed-node-2 images 2025-05-25 04:18:52.955463 | orchestrator | REPOSITORY TAG IMAGE ID CREATED SIZE 2025-05-25 04:18:52.955570 | orchestrator | registry.osism.tech/osism/ceph-daemon reef 
f2b54d0844b7 55 minutes ago 1.27GB 2025-05-25 04:18:52.955586 | orchestrator | registry.osism.tech/kolla/cron 2024.2 58971d7378a2 3 hours ago 318MB 2025-05-25 04:18:52.955597 | orchestrator | registry.osism.tech/kolla/rabbitmq 2024.2 5fbdda89147d 3 hours ago 375MB 2025-05-25 04:18:52.955608 | orchestrator | registry.osism.tech/kolla/proxysql 2024.2 75089a8989c2 3 hours ago 417MB 2025-05-25 04:18:52.955619 | orchestrator | registry.osism.tech/kolla/kolla-toolbox 2024.2 93e38a66c98d 3 hours ago 746MB 2025-05-25 04:18:52.955630 | orchestrator | registry.osism.tech/kolla/keepalived 2024.2 e4d71f3426d0 3 hours ago 329MB 2025-05-25 04:18:52.955640 | orchestrator | registry.osism.tech/kolla/memcached 2024.2 78c30b7a7702 3 hours ago 318MB 2025-05-25 04:18:52.955651 | orchestrator | registry.osism.tech/kolla/haproxy 2024.2 691eed5d7ff9 3 hours ago 326MB 2025-05-25 04:18:52.955661 | orchestrator | registry.osism.tech/kolla/grafana 2024.2 16822c7a80ac 3 hours ago 1.01GB 2025-05-25 04:18:52.955672 | orchestrator | registry.osism.tech/kolla/fluentd 2024.2 806bf28ba938 3 hours ago 628MB 2025-05-25 04:18:52.955683 | orchestrator | registry.osism.tech/kolla/opensearch-dashboards 2024.2 2977d2abdc58 3 hours ago 1.55GB 2025-05-25 04:18:52.955712 | orchestrator | registry.osism.tech/kolla/opensearch 2024.2 fd12f394bbe0 3 hours ago 1.59GB 2025-05-25 04:18:52.955723 | orchestrator | registry.osism.tech/kolla/mariadb-server 2024.2 bdef7ed8c3d9 3 hours ago 590MB 2025-05-25 04:18:52.955734 | orchestrator | registry.osism.tech/kolla/redis 2024.2 6ed2e7650002 3 hours ago 324MB 2025-05-25 04:18:52.955745 | orchestrator | registry.osism.tech/kolla/redis-sentinel 2024.2 1931f358fc73 3 hours ago 324MB 2025-05-25 04:18:52.955755 | orchestrator | registry.osism.tech/kolla/prometheus-elasticsearch-exporter 2024.2 5962221acfc6 3 hours ago 344MB 2025-05-25 04:18:52.955766 | orchestrator | registry.osism.tech/kolla/prometheus-cadvisor 2024.2 896e109b6430 3 hours ago 410MB 2025-05-25 04:18:52.955795 | 
orchestrator | registry.osism.tech/kolla/prometheus-memcached-exporter 2024.2 7b3419ba113e 3 hours ago 351MB 2025-05-25 04:18:52.955806 | orchestrator | registry.osism.tech/kolla/prometheus-mysqld-exporter 2024.2 264d7fb5e064 3 hours ago 353MB 2025-05-25 04:18:52.955816 | orchestrator | registry.osism.tech/kolla/prometheus-node-exporter 2024.2 2de930d5879e 3 hours ago 358MB 2025-05-25 04:18:52.955827 | orchestrator | registry.osism.tech/kolla/horizon 2024.2 8d0828f0a352 3 hours ago 1.21GB 2025-05-25 04:18:52.955837 | orchestrator | registry.osism.tech/kolla/openvswitch-vswitchd 2024.2 f96b911491a9 3 hours ago 361MB 2025-05-25 04:18:52.955848 | orchestrator | registry.osism.tech/kolla/openvswitch-db-server 2024.2 a69b78127aab 3 hours ago 361MB 2025-05-25 04:18:52.955859 | orchestrator | registry.osism.tech/kolla/cinder-scheduler 2024.2 5c226626146a 3 hours ago 1.41GB 2025-05-25 04:18:52.955870 | orchestrator | registry.osism.tech/kolla/cinder-api 2024.2 488efca38567 3 hours ago 1.41GB 2025-05-25 04:18:52.955880 | orchestrator | registry.osism.tech/kolla/octavia-health-manager 2024.2 31d158dcede9 3 hours ago 1.1GB 2025-05-25 04:18:52.955891 | orchestrator | registry.osism.tech/kolla/octavia-driver-agent 2024.2 87b4af4ead98 3 hours ago 1.12GB 2025-05-25 04:18:52.955901 | orchestrator | registry.osism.tech/kolla/octavia-housekeeping 2024.2 8f1b990fc4d1 3 hours ago 1.1GB 2025-05-25 04:18:52.955912 | orchestrator | registry.osism.tech/kolla/octavia-api 2024.2 74c876a3489e 3 hours ago 1.12GB 2025-05-25 04:18:52.955922 | orchestrator | registry.osism.tech/kolla/octavia-worker 2024.2 e137033109ec 3 hours ago 1.1GB 2025-05-25 04:18:52.955933 | orchestrator | registry.osism.tech/kolla/neutron-server 2024.2 9f345dc99314 3 hours ago 1.24GB 2025-05-25 04:18:52.955945 | orchestrator | registry.osism.tech/kolla/glance-api 2024.2 4baf7bc69b5a 3 hours ago 1.15GB 2025-05-25 04:18:52.955957 | orchestrator | registry.osism.tech/kolla/nova-novncproxy 2024.2 13dc36b9e070 3 hours ago 
1.42GB 2025-05-25 04:18:52.955970 | orchestrator | registry.osism.tech/kolla/nova-conductor 2024.2 e17f34b1d21e 3 hours ago 1.29GB 2025-05-25 04:18:52.955982 | orchestrator | registry.osism.tech/kolla/nova-api 2024.2 6972cea43ee1 3 hours ago 1.29GB 2025-05-25 04:18:52.955995 | orchestrator | registry.osism.tech/kolla/nova-scheduler 2024.2 2b0d6e28bf46 3 hours ago 1.29GB 2025-05-25 04:18:52.956029 | orchestrator | registry.osism.tech/kolla/designate-producer 2024.2 ae2b57ccff5c 3 hours ago 1.05GB 2025-05-25 04:18:52.956043 | orchestrator | registry.osism.tech/kolla/designate-worker 2024.2 4e438195f140 3 hours ago 1.06GB 2025-05-25 04:18:52.956056 | orchestrator | registry.osism.tech/kolla/designate-backend-bind9 2024.2 dd5d75366ebe 3 hours ago 1.06GB 2025-05-25 04:18:52.956067 | orchestrator | registry.osism.tech/kolla/designate-api 2024.2 764605d9d48a 3 hours ago 1.05GB 2025-05-25 04:18:52.956078 | orchestrator | registry.osism.tech/kolla/designate-central 2024.2 f4baf1f3f8d2 3 hours ago 1.05GB 2025-05-25 04:18:52.956088 | orchestrator | registry.osism.tech/kolla/designate-mdns 2024.2 949730d4c408 3 hours ago 1.05GB 2025-05-25 04:18:52.956099 | orchestrator | registry.osism.tech/kolla/magnum-api 2024.2 a20cd6ce313f 3 hours ago 1.19GB 2025-05-25 04:18:52.956109 | orchestrator | registry.osism.tech/kolla/magnum-conductor 2024.2 fc1a62cee97b 3 hours ago 1.31GB 2025-05-25 04:18:52.956120 | orchestrator | registry.osism.tech/kolla/placement-api 2024.2 89cf50af4ede 3 hours ago 1.04GB 2025-05-25 04:18:52.956130 | orchestrator | registry.osism.tech/kolla/barbican-keystone-listener 2024.2 d40ce0227892 3 hours ago 1.06GB 2025-05-25 04:18:52.956147 | orchestrator | registry.osism.tech/kolla/barbican-api 2024.2 a7074e4475c3 3 hours ago 1.06GB 2025-05-25 04:18:52.956157 | orchestrator | registry.osism.tech/kolla/barbican-worker 2024.2 26176e7f6d15 3 hours ago 1.06GB 2025-05-25 04:18:52.956168 | orchestrator | registry.osism.tech/kolla/keystone-fernet 2024.2 3c62ad64b7be 3 hours 
ago 1.11GB 2025-05-25 04:18:52.956178 | orchestrator | registry.osism.tech/kolla/keystone-ssh 2024.2 35272f0b757d 3 hours ago 1.11GB 2025-05-25 04:18:52.956189 | orchestrator | registry.osism.tech/kolla/keystone 2024.2 1859a03777cf 3 hours ago 1.13GB 2025-05-25 04:18:52.956199 | orchestrator | registry.osism.tech/kolla/ovn-controller 2024.2 3458b6779157 3 hours ago 947MB 2025-05-25 04:18:52.956210 | orchestrator | registry.osism.tech/kolla/ovn-nb-db-server 2024.2 70432c491715 3 hours ago 946MB 2025-05-25 04:18:52.956220 | orchestrator | registry.osism.tech/kolla/ovn-northd 2024.2 3f40aba304ce 3 hours ago 947MB 2025-05-25 04:18:52.956231 | orchestrator | registry.osism.tech/kolla/ovn-sb-db-server 2024.2 55d25458df10 3 hours ago 946MB 2025-05-25 04:18:53.192580 | orchestrator | + sh -c /opt/configuration/scripts/check-services.sh 2025-05-25 04:18:53.200188 | orchestrator | + set -e 2025-05-25 04:18:53.200258 | orchestrator | + source /opt/manager-vars.sh 2025-05-25 04:18:53.201273 | orchestrator | ++ export NUMBER_OF_NODES=6 2025-05-25 04:18:53.201392 | orchestrator | ++ NUMBER_OF_NODES=6 2025-05-25 04:18:53.201403 | orchestrator | ++ export CEPH_VERSION=reef 2025-05-25 04:18:53.201411 | orchestrator | ++ CEPH_VERSION=reef 2025-05-25 04:18:53.201418 | orchestrator | ++ export CONFIGURATION_VERSION=main 2025-05-25 04:18:53.201430 | orchestrator | ++ CONFIGURATION_VERSION=main 2025-05-25 04:18:53.201438 | orchestrator | ++ export MANAGER_VERSION=latest 2025-05-25 04:18:53.201446 | orchestrator | ++ MANAGER_VERSION=latest 2025-05-25 04:18:53.201454 | orchestrator | ++ export OPENSTACK_VERSION=2024.2 2025-05-25 04:18:53.201461 | orchestrator | ++ OPENSTACK_VERSION=2024.2 2025-05-25 04:18:53.201468 | orchestrator | ++ export ARA=false 2025-05-25 04:18:53.201475 | orchestrator | ++ ARA=false 2025-05-25 04:18:53.201483 | orchestrator | ++ export TEMPEST=true 2025-05-25 04:18:53.201490 | orchestrator | ++ TEMPEST=true 2025-05-25 04:18:53.201497 | orchestrator | ++ export 
IS_ZUUL=true 2025-05-25 04:18:53.201504 | orchestrator | ++ IS_ZUUL=true 2025-05-25 04:18:53.201511 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.192.153 2025-05-25 04:18:53.201519 | orchestrator | ++ MANAGER_PUBLIC_IP_ADDRESS=81.163.192.153 2025-05-25 04:18:53.201526 | orchestrator | ++ export EXTERNAL_API=false 2025-05-25 04:18:53.201533 | orchestrator | ++ EXTERNAL_API=false 2025-05-25 04:18:53.201540 | orchestrator | ++ export IMAGE_USER=ubuntu 2025-05-25 04:18:53.201547 | orchestrator | ++ IMAGE_USER=ubuntu 2025-05-25 04:18:53.201554 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu 2025-05-25 04:18:53.201561 | orchestrator | ++ IMAGE_NODE_USER=ubuntu 2025-05-25 04:18:53.201568 | orchestrator | ++ export CEPH_STACK=ceph-ansible 2025-05-25 04:18:53.201575 | orchestrator | ++ CEPH_STACK=ceph-ansible 2025-05-25 04:18:53.201582 | orchestrator | + [[ ceph-ansible == \c\e\p\h\-\a\n\s\i\b\l\e ]] 2025-05-25 04:18:53.201589 | orchestrator | + sh -c /opt/configuration/scripts/check/100-ceph-with-ansible.sh 2025-05-25 04:18:53.211806 | orchestrator | + set -e 2025-05-25 04:18:53.211881 | orchestrator | + source /opt/configuration/scripts/include.sh 2025-05-25 04:18:53.211893 | orchestrator | ++ export INTERACTIVE=false 2025-05-25 04:18:53.211904 | orchestrator | ++ INTERACTIVE=false 2025-05-25 04:18:53.211914 | orchestrator | ++ export OSISM_APPLY_RETRY=1 2025-05-25 04:18:53.211923 | orchestrator | ++ OSISM_APPLY_RETRY=1 2025-05-25 04:18:53.211933 | orchestrator | + source /opt/configuration/scripts/manager-version.sh 2025-05-25 04:18:53.212689 | orchestrator | +++ awk '-F: ' '/^manager_version:/ { print $2 }' /opt/configuration/environments/manager/configuration.yml 2025-05-25 04:18:53.218990 | orchestrator | 2025-05-25 04:18:53.219073 | orchestrator | # Ceph status 2025-05-25 04:18:53.219086 | orchestrator | ++ export MANAGER_VERSION=latest 2025-05-25 04:18:53.219096 | orchestrator | ++ MANAGER_VERSION=latest 2025-05-25 04:18:53.219105 | orchestrator | + 
echo 2025-05-25 04:18:53.219114 | orchestrator | + echo '# Ceph status' 2025-05-25 04:18:53.219122 | orchestrator | 2025-05-25 04:18:53.219131 | orchestrator | + echo 2025-05-25 04:18:53.219140 | orchestrator | + ceph -s 2025-05-25 04:18:53.826703 | orchestrator | cluster: 2025-05-25 04:18:53.826800 | orchestrator | id: 11111111-1111-1111-1111-111111111111 2025-05-25 04:18:53.826836 | orchestrator | health: HEALTH_OK 2025-05-25 04:18:53.826848 | orchestrator | 2025-05-25 04:18:53.826858 | orchestrator | services: 2025-05-25 04:18:53.826867 | orchestrator | mon: 3 daemons, quorum testbed-node-0,testbed-node-1,testbed-node-2 (age 26m) 2025-05-25 04:18:53.826878 | orchestrator | mgr: testbed-node-0(active, since 14m), standbys: testbed-node-1, testbed-node-2 2025-05-25 04:18:53.826890 | orchestrator | mds: 1/1 daemons up, 2 standby 2025-05-25 04:18:53.826899 | orchestrator | osd: 6 osds: 6 up (since 22m), 6 in (since 23m) 2025-05-25 04:18:53.826909 | orchestrator | rgw: 3 daemons active (3 hosts, 1 zones) 2025-05-25 04:18:53.826919 | orchestrator | 2025-05-25 04:18:53.826928 | orchestrator | data: 2025-05-25 04:18:53.826937 | orchestrator | volumes: 1/1 healthy 2025-05-25 04:18:53.826947 | orchestrator | pools: 14 pools, 401 pgs 2025-05-25 04:18:53.826957 | orchestrator | objects: 555 objects, 2.2 GiB 2025-05-25 04:18:53.826966 | orchestrator | usage: 7.1 GiB used, 113 GiB / 120 GiB avail 2025-05-25 04:18:53.826976 | orchestrator | pgs: 401 active+clean 2025-05-25 04:18:53.826985 | orchestrator | 2025-05-25 04:18:53.886628 | orchestrator | 2025-05-25 04:18:53.886727 | orchestrator | # Ceph versions 2025-05-25 04:18:53.886741 | orchestrator | 2025-05-25 04:18:53.886753 | orchestrator | + echo 2025-05-25 04:18:53.886765 | orchestrator | + echo '# Ceph versions' 2025-05-25 04:18:53.886777 | orchestrator | + echo 2025-05-25 04:18:53.886788 | orchestrator | + ceph versions 2025-05-25 04:18:54.478623 | orchestrator | { 2025-05-25 04:18:54.478727 | orchestrator | "mon": { 
2025-05-25 04:18:54.478744 | orchestrator | "ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable)": 3 2025-05-25 04:18:54.478757 | orchestrator | }, 2025-05-25 04:18:54.478768 | orchestrator | "mgr": { 2025-05-25 04:18:54.478780 | orchestrator | "ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable)": 3 2025-05-25 04:18:54.478790 | orchestrator | }, 2025-05-25 04:18:54.478801 | orchestrator | "osd": { 2025-05-25 04:18:54.478812 | orchestrator | "ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable)": 6 2025-05-25 04:18:54.478823 | orchestrator | }, 2025-05-25 04:18:54.478833 | orchestrator | "mds": { 2025-05-25 04:18:54.478844 | orchestrator | "ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable)": 3 2025-05-25 04:18:54.478855 | orchestrator | }, 2025-05-25 04:18:54.478866 | orchestrator | "rgw": { 2025-05-25 04:18:54.478877 | orchestrator | "ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable)": 3 2025-05-25 04:18:54.478887 | orchestrator | }, 2025-05-25 04:18:54.478898 | orchestrator | "overall": { 2025-05-25 04:18:54.478910 | orchestrator | "ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable)": 18 2025-05-25 04:18:54.478921 | orchestrator | } 2025-05-25 04:18:54.478931 | orchestrator | } 2025-05-25 04:18:54.522643 | orchestrator | 2025-05-25 04:18:54.522772 | orchestrator | # Ceph OSD tree 2025-05-25 04:18:54.522796 | orchestrator | 2025-05-25 04:18:54.522816 | orchestrator | + echo 2025-05-25 04:18:54.522863 | orchestrator | + echo '# Ceph OSD tree' 2025-05-25 04:18:54.522886 | orchestrator | + echo 2025-05-25 04:18:54.522906 | orchestrator | + ceph osd df tree 2025-05-25 04:18:55.063082 | orchestrator | ID CLASS WEIGHT REWEIGHT SIZE RAW USE DATA OMAP META AVAIL %USE VAR PGS STATUS TYPE NAME 2025-05-25 04:18:55.063200 | orchestrator | -1 0.11691 - 120 GiB 7.1 GiB 6.7 GiB 6 KiB 425 MiB 113 GiB 5.91 1.00 - root default 
2025-05-25 04:18:55.063215 | orchestrator | -3 0.03897 - 40 GiB 2.4 GiB 2.2 GiB 2 KiB 139 MiB 38 GiB 5.91 1.00 - host testbed-node-3
2025-05-25 04:18:55.063226 | orchestrator | 0 hdd 0.01949 1.00000 20 GiB 1.4 GiB 1.3 GiB 1 KiB 70 MiB 19 GiB 6.92 1.17 201 up osd.0
2025-05-25 04:18:55.063237 | orchestrator | 5 hdd 0.01949 1.00000 20 GiB 1000 MiB 931 MiB 1 KiB 70 MiB 19 GiB 4.89 0.83 189 up osd.5
2025-05-25 04:18:55.063248 | orchestrator | -5 0.03897 - 40 GiB 2.4 GiB 2.2 GiB 2 KiB 143 MiB 38 GiB 5.92 1.00 - host testbed-node-4
2025-05-25 04:18:55.063259 | orchestrator | 2 hdd 0.01949 1.00000 20 GiB 1.4 GiB 1.3 GiB 1 KiB 74 MiB 19 GiB 6.83 1.15 192 up osd.2
2025-05-25 04:18:55.063269 | orchestrator | 3 hdd 0.01949 1.00000 20 GiB 1.0 GiB 955 MiB 1 KiB 70 MiB 19 GiB 5.01 0.85 200 up osd.3
2025-05-25 04:18:55.063369 | orchestrator | -7 0.03897 - 40 GiB 2.4 GiB 2.2 GiB 2 KiB 143 MiB 38 GiB 5.92 1.00 - host testbed-node-5
2025-05-25 04:18:55.063383 | orchestrator | 1 hdd 0.01949 1.00000 20 GiB 1.2 GiB 1.1 GiB 1 KiB 74 MiB 19 GiB 6.04 1.02 192 up osd.1
2025-05-25 04:18:55.063393 | orchestrator | 4 hdd 0.01949 1.00000 20 GiB 1.2 GiB 1.1 GiB 1 KiB 70 MiB 19 GiB 5.79 0.98 196 up osd.4
2025-05-25 04:18:55.063404 | orchestrator | TOTAL 120 GiB 7.1 GiB 6.7 GiB 9.3 KiB 425 MiB 113 GiB 5.91
2025-05-25 04:18:55.063416 | orchestrator | MIN/MAX VAR: 0.83/1.17 STDDEV: 0.79
2025-05-25 04:18:55.105271 | orchestrator |
2025-05-25 04:18:55.105417 | orchestrator | # Ceph monitor status
2025-05-25 04:18:55.105433 | orchestrator |
2025-05-25 04:18:55.105445 | orchestrator | + echo
2025-05-25 04:18:55.105457 | orchestrator | + echo '# Ceph monitor status'
2025-05-25 04:18:55.105468 | orchestrator | + echo
2025-05-25 04:18:55.105479 | orchestrator | + ceph mon stat
2025-05-25 04:18:55.686125 | orchestrator | e1: 3 mons at {testbed-node-0=[v2:192.168.16.10:3300/0,v1:192.168.16.10:6789/0],testbed-node-1=[v2:192.168.16.11:3300/0,v1:192.168.16.11:6789/0],testbed-node-2=[v2:192.168.16.12:3300/0,v1:192.168.16.12:6789/0]} removed_ranks: {} disallowed_leaders: {}, election epoch 8, leader 0 testbed-node-0, quorum 0,1,2 testbed-node-0,testbed-node-1,testbed-node-2
2025-05-25 04:18:55.737924 | orchestrator |
2025-05-25 04:18:55.738086 | orchestrator | # Ceph quorum status
2025-05-25 04:18:55.738104 | orchestrator |
2025-05-25 04:18:55.738116 | orchestrator | + echo
2025-05-25 04:18:55.738127 | orchestrator | + echo '# Ceph quorum status'
2025-05-25 04:18:55.738139 | orchestrator | + echo
2025-05-25 04:18:55.738647 | orchestrator | + ceph quorum_status
2025-05-25 04:18:55.738672 | orchestrator | + jq
2025-05-25 04:18:56.369267 | orchestrator | {
2025-05-25 04:18:56.369568 | orchestrator |   "election_epoch": 8,
2025-05-25 04:18:56.369590 | orchestrator |   "quorum": [
2025-05-25 04:18:56.369602 | orchestrator |     0,
2025-05-25 04:18:56.369613 | orchestrator |     1,
2025-05-25 04:18:56.369623 | orchestrator |     2
2025-05-25 04:18:56.369634 | orchestrator |   ],
2025-05-25 04:18:56.369645 | orchestrator |   "quorum_names": [
2025-05-25 04:18:56.369655 | orchestrator |     "testbed-node-0",
2025-05-25 04:18:56.369666 | orchestrator |     "testbed-node-1",
2025-05-25 04:18:56.369676 | orchestrator |     "testbed-node-2"
2025-05-25 04:18:56.369687 | orchestrator |   ],
2025-05-25 04:18:56.369698 | orchestrator |   "quorum_leader_name": "testbed-node-0",
2025-05-25 04:18:56.369709 | orchestrator |   "quorum_age": 1589,
2025-05-25 04:18:56.369720 | orchestrator |   "features": {
2025-05-25 04:18:56.369731 | orchestrator |     "quorum_con": "4540138322906710015",
2025-05-25 04:18:56.369741 | orchestrator |     "quorum_mon": [
2025-05-25 04:18:56.369752 | orchestrator |       "kraken",
2025-05-25 04:18:56.369762 | orchestrator |       "luminous",
2025-05-25 04:18:56.369774 | orchestrator |       "mimic",
2025-05-25 04:18:56.369784 | orchestrator |       "osdmap-prune",
2025-05-25 04:18:56.369795 | orchestrator |       "nautilus",
2025-05-25 04:18:56.369805 | orchestrator |       "octopus",
2025-05-25 04:18:56.369816 | orchestrator |       "pacific",
2025-05-25 04:18:56.369826 | orchestrator |       "elector-pinging",
2025-05-25 04:18:56.369836 | orchestrator |       "quincy",
2025-05-25 04:18:56.369847 | orchestrator |       "reef"
2025-05-25 04:18:56.369858 | orchestrator |     ]
2025-05-25 04:18:56.369868 | orchestrator |   },
2025-05-25 04:18:56.369879 | orchestrator |   "monmap": {
2025-05-25 04:18:56.369889 | orchestrator |     "epoch": 1,
2025-05-25 04:18:56.369900 | orchestrator |     "fsid": "11111111-1111-1111-1111-111111111111",
2025-05-25 04:18:56.369911 | orchestrator |     "modified": "2025-05-25T03:52:09.818895Z",
2025-05-25 04:18:56.369922 | orchestrator |     "created": "2025-05-25T03:52:09.818895Z",
2025-05-25 04:18:56.369932 | orchestrator |     "min_mon_release": 18,
2025-05-25 04:18:56.369943 | orchestrator |     "min_mon_release_name": "reef",
2025-05-25 04:18:56.369953 | orchestrator |     "election_strategy": 1,
2025-05-25 04:18:56.369964 | orchestrator |     "disallowed_leaders: ": "",
2025-05-25 04:18:56.369974 | orchestrator |     "stretch_mode": false,
2025-05-25 04:18:56.369985 | orchestrator |     "tiebreaker_mon": "",
2025-05-25 04:18:56.369995 | orchestrator |     "removed_ranks: ": "",
2025-05-25 04:18:56.370006 | orchestrator |     "features": {
2025-05-25 04:18:56.370074 | orchestrator |       "persistent": [
2025-05-25 04:18:56.370087 | orchestrator |         "kraken",
2025-05-25 04:18:56.370097 | orchestrator |         "luminous",
2025-05-25 04:18:56.370108 | orchestrator |         "mimic",
2025-05-25 04:18:56.370119 | orchestrator |         "osdmap-prune",
2025-05-25 04:18:56.370130 | orchestrator |         "nautilus",
2025-05-25 04:18:56.370165 | orchestrator |         "octopus",
2025-05-25 04:18:56.370177 | orchestrator |         "pacific",
2025-05-25 04:18:56.370190 | orchestrator |         "elector-pinging",
2025-05-25 04:18:56.370203 | orchestrator |         "quincy",
2025-05-25 04:18:56.370215 | orchestrator |         "reef"
2025-05-25 04:18:56.370227 | orchestrator |       ],
2025-05-25 04:18:56.370240 | orchestrator |       "optional": []
2025-05-25 04:18:56.370252 | orchestrator |     },
2025-05-25 04:18:56.370265 | orchestrator |     "mons": [
2025-05-25 04:18:56.370277 | orchestrator |       {
2025-05-25 04:18:56.370312 | orchestrator |         "rank": 0,
2025-05-25 04:18:56.370324 | orchestrator |         "name": "testbed-node-0",
2025-05-25 04:18:56.370351 | orchestrator |         "public_addrs": {
2025-05-25 04:18:56.370363 | orchestrator |           "addrvec": [
2025-05-25 04:18:56.370376 | orchestrator |             {
2025-05-25 04:18:56.370388 | orchestrator |               "type": "v2",
2025-05-25 04:18:56.370399 | orchestrator |               "addr": "192.168.16.10:3300",
2025-05-25 04:18:56.370410 | orchestrator |               "nonce": 0
2025-05-25 04:18:56.370420 | orchestrator |             },
2025-05-25 04:18:56.370431 | orchestrator |             {
2025-05-25 04:18:56.370442 | orchestrator |               "type": "v1",
2025-05-25 04:18:56.370452 | orchestrator |               "addr": "192.168.16.10:6789",
2025-05-25 04:18:56.370463 | orchestrator |               "nonce": 0
2025-05-25 04:18:56.370474 | orchestrator |             }
2025-05-25 04:18:56.370484 | orchestrator |           ]
2025-05-25 04:18:56.370495 | orchestrator |         },
2025-05-25 04:18:56.370505 | orchestrator |         "addr": "192.168.16.10:6789/0",
2025-05-25 04:18:56.370516 | orchestrator |         "public_addr": "192.168.16.10:6789/0",
2025-05-25 04:18:56.370527 | orchestrator |         "priority": 0,
2025-05-25 04:18:56.370538 | orchestrator |         "weight": 0,
2025-05-25 04:18:56.370548 | orchestrator |         "crush_location": "{}"
2025-05-25 04:18:56.370559 | orchestrator |       },
2025-05-25 04:18:56.370570 | orchestrator |       {
2025-05-25 04:18:56.370580 | orchestrator |         "rank": 1,
2025-05-25 04:18:56.370591 | orchestrator |         "name": "testbed-node-1",
2025-05-25 04:18:56.370602 | orchestrator |         "public_addrs": {
2025-05-25 04:18:56.370612 | orchestrator |           "addrvec": [
2025-05-25 04:18:56.370623 | orchestrator |             {
2025-05-25 04:18:56.370634 | orchestrator |               "type": "v2",
2025-05-25 04:18:56.370644 | orchestrator |               "addr": "192.168.16.11:3300",
2025-05-25 04:18:56.370655 | orchestrator |               "nonce": 0
2025-05-25 04:18:56.370666 | orchestrator |             },
2025-05-25 04:18:56.370676 | orchestrator |             {
2025-05-25 04:18:56.370687 | orchestrator |               "type": "v1",
2025-05-25 04:18:56.370698 | orchestrator |               "addr": "192.168.16.11:6789",
2025-05-25 04:18:56.370708 | orchestrator |               "nonce": 0
2025-05-25 04:18:56.370719 | orchestrator |             }
2025-05-25 04:18:56.370730 | orchestrator |           ]
2025-05-25 04:18:56.370740 | orchestrator |         },
2025-05-25 04:18:56.370751 | orchestrator |         "addr": "192.168.16.11:6789/0",
2025-05-25 04:18:56.370762 | orchestrator |         "public_addr": "192.168.16.11:6789/0",
2025-05-25 04:18:56.370773 | orchestrator |         "priority": 0,
2025-05-25 04:18:56.370788 | orchestrator |         "weight": 0,
2025-05-25 04:18:56.370806 | orchestrator |         "crush_location": "{}"
2025-05-25 04:18:56.370824 | orchestrator |       },
2025-05-25 04:18:56.370842 | orchestrator |       {
2025-05-25 04:18:56.370861 | orchestrator |         "rank": 2,
2025-05-25 04:18:56.370879 | orchestrator |         "name": "testbed-node-2",
2025-05-25 04:18:56.370898 | orchestrator |         "public_addrs": {
2025-05-25 04:18:56.370916 | orchestrator |           "addrvec": [
2025-05-25 04:18:56.370929 | orchestrator |             {
2025-05-25 04:18:56.370939 | orchestrator |               "type": "v2",
2025-05-25 04:18:56.370950 | orchestrator |               "addr": "192.168.16.12:3300",
2025-05-25 04:18:56.370996 | orchestrator |               "nonce": 0
2025-05-25 04:18:56.371015 | orchestrator |             },
2025-05-25 04:18:56.371035 | orchestrator |             {
2025-05-25 04:18:56.371054 | orchestrator |               "type": "v1",
2025-05-25 04:18:56.371071 | orchestrator |               "addr": "192.168.16.12:6789",
2025-05-25 04:18:56.371090 | orchestrator |               "nonce": 0
2025-05-25 04:18:56.371107 | orchestrator |             }
2025-05-25 04:18:56.371126 | orchestrator |           ]
2025-05-25 04:18:56.371143 | orchestrator |         },
2025-05-25 04:18:56.371162 | orchestrator |         "addr": "192.168.16.12:6789/0",
2025-05-25 04:18:56.371173 | orchestrator |         "public_addr": "192.168.16.12:6789/0",
2025-05-25 04:18:56.371184 | orchestrator |         "priority": 0,
2025-05-25 04:18:56.371194 | orchestrator |         "weight": 0,
2025-05-25 04:18:56.371204 | orchestrator |         "crush_location": "{}"
2025-05-25 04:18:56.371215 | orchestrator |       }
2025-05-25 04:18:56.371237 | orchestrator |     ]
2025-05-25 04:18:56.371247 | orchestrator |   }
2025-05-25 04:18:56.371258 | orchestrator | }
2025-05-25 04:18:56.371339 | orchestrator |
2025-05-25 04:18:56.371356 | orchestrator | # Ceph free space status
2025-05-25 04:18:56.371366 | orchestrator |
2025-05-25 04:18:56.371377 | orchestrator | + echo
2025-05-25 04:18:56.371388 | orchestrator | + echo '# Ceph free space status'
2025-05-25 04:18:56.371399 | orchestrator | + echo
2025-05-25 04:18:56.371410 | orchestrator | + ceph df
2025-05-25 04:18:56.959238 | orchestrator | --- RAW STORAGE ---
2025-05-25 04:18:56.959404 | orchestrator | CLASS SIZE AVAIL USED RAW USED %RAW USED
2025-05-25 04:18:56.959436 | orchestrator | hdd 120 GiB 113 GiB 7.1 GiB 7.1 GiB 5.91
2025-05-25 04:18:56.959449 | orchestrator | TOTAL 120 GiB 113 GiB 7.1 GiB 7.1 GiB 5.91
2025-05-25 04:18:56.959460 | orchestrator |
2025-05-25 04:18:56.959472 | orchestrator | --- POOLS ---
2025-05-25 04:18:56.959483 | orchestrator | POOL ID PGS STORED OBJECTS USED %USED MAX AVAIL
2025-05-25 04:18:56.959495 | orchestrator | .mgr 1 1 577 KiB 2 1.1 MiB 0 53 GiB
2025-05-25 04:18:56.959506 | orchestrator | cephfs_data 2 32 0 B 0 0 B 0 35 GiB
2025-05-25 04:18:56.959517 | orchestrator | cephfs_metadata 3 16 4.4 KiB 22 96 KiB 0 35 GiB
2025-05-25 04:18:56.959527 | orchestrator | default.rgw.buckets.data 4 32 0 B 0 0 B 0 35 GiB
2025-05-25 04:18:56.959538 | orchestrator | default.rgw.buckets.index 5 32 0 B 0 0 B 0 35 GiB
2025-05-25 04:18:56.959549 | orchestrator | default.rgw.control 6 32 0 B 8 0 B 0 35 GiB
2025-05-25 04:18:56.959559 | orchestrator | default.rgw.log 7 32 3.6 KiB 209 408 KiB 0 35 GiB
2025-05-25 04:18:56.959570 | orchestrator | default.rgw.meta 8 32 0 B 0 0 B 0 35 GiB
2025-05-25 04:18:56.959580 | orchestrator | .rgw.root 9 32 3.0 KiB 7 56 KiB 0 53 GiB
2025-05-25 04:18:56.959591 | orchestrator | backups 10 32 19 B 2 12 KiB 0 35 GiB
2025-05-25 04:18:56.959601 | orchestrator | volumes 11 32 19 B 2 12 KiB 0 35 GiB
2025-05-25 04:18:56.959612 | orchestrator | images 12 32 2.2 GiB 299 6.7 GiB 5.93 35 GiB
2025-05-25 04:18:56.959623 | orchestrator | metrics 13 32 19 B 2 12 KiB 0 35 GiB
2025-05-25 04:18:56.959633 | orchestrator | vms 14 32 19 B 2 12 KiB 0 35 GiB
2025-05-25 04:18:57.013007 | orchestrator | ++ semver latest 5.0.0
2025-05-25 04:18:57.055121 | orchestrator | + [[ -1 -eq -1 ]]
2025-05-25 04:18:57.055209 | orchestrator | + [[ latest != \l\a\t\e\s\t ]]
2025-05-25 04:18:57.055224 | orchestrator | + [[ ! -e /etc/redhat-release ]]
2025-05-25 04:18:57.055234 | orchestrator | + osism apply facts
2025-05-25 04:18:58.790755 | orchestrator | 2025-05-25 04:18:58 | INFO  | Task e698e0f9-4c69-475c-b492-3c2435aeb73f (facts) was prepared for execution.
2025-05-25 04:18:58.790858 | orchestrator | 2025-05-25 04:18:58 | INFO  | It takes a moment until task e698e0f9-4c69-475c-b492-3c2435aeb73f (facts) has been started and output is visible here.
2025-05-25 04:19:02.815205 | orchestrator |
2025-05-25 04:19:02.818760 | orchestrator | PLAY [Apply role facts] ********************************************************
2025-05-25 04:19:02.818806 | orchestrator |
2025-05-25 04:19:02.818820 | orchestrator | TASK [osism.commons.facts : Create custom facts directory] *********************
2025-05-25 04:19:02.818832 | orchestrator | Sunday 25 May 2025 04:19:02 +0000 (0:00:00.261) 0:00:00.261 ************
2025-05-25 04:19:03.429516 | orchestrator | ok: [testbed-manager]
2025-05-25 04:19:03.925391 | orchestrator | ok: [testbed-node-0]
2025-05-25 04:19:03.925664 | orchestrator | ok: [testbed-node-1]
2025-05-25 04:19:03.927045 | orchestrator | ok: [testbed-node-2]
2025-05-25 04:19:03.928697 | orchestrator | ok: [testbed-node-3]
2025-05-25 04:19:03.929064 | orchestrator | ok: [testbed-node-4]
2025-05-25 04:19:03.929937 | orchestrator | ok: [testbed-node-5]
2025-05-25 04:19:03.931065 | orchestrator |
2025-05-25 04:19:03.932730 | orchestrator | TASK [osism.commons.facts : Copy fact files] ***********************************
2025-05-25 04:19:03.933498 | orchestrator | Sunday 25 May 2025 04:19:03 +0000 (0:00:01.107) 0:00:01.368 ************
2025-05-25 04:19:04.095891 | orchestrator | skipping: [testbed-manager]
2025-05-25 04:19:04.181807 | orchestrator | skipping: [testbed-node-0]
2025-05-25 04:19:04.262990 | orchestrator | skipping: [testbed-node-1]
2025-05-25 04:19:04.343421 | orchestrator | skipping: [testbed-node-2]
2025-05-25 04:19:04.421758 | orchestrator | skipping: [testbed-node-3]
2025-05-25 04:19:05.157972 | orchestrator | skipping: [testbed-node-4]
2025-05-25 04:19:05.160459 | orchestrator | skipping: [testbed-node-5]
2025-05-25 04:19:05.161438 | orchestrator |
2025-05-25 04:19:05.162801 | orchestrator | PLAY [Gather facts for all hosts] **********************************************
2025-05-25 04:19:05.164634 | orchestrator |
2025-05-25 04:19:05.165453 | orchestrator | TASK [Gathers facts about hosts] ***********************************************
2025-05-25 04:19:05.170210 | orchestrator | Sunday 25 May 2025 04:19:05 +0000 (0:00:01.235) 0:00:02.604 ************
2025-05-25 04:19:10.268420 | orchestrator | ok: [testbed-node-2]
2025-05-25 04:19:10.272906 | orchestrator | ok: [testbed-node-0]
2025-05-25 04:19:10.273806 | orchestrator | ok: [testbed-node-1]
2025-05-25 04:19:10.279137 | orchestrator | ok: [testbed-manager]
2025-05-25 04:19:10.279217 | orchestrator | ok: [testbed-node-3]
2025-05-25 04:19:10.279237 | orchestrator | ok: [testbed-node-5]
2025-05-25 04:19:10.280687 | orchestrator | ok: [testbed-node-4]
2025-05-25 04:19:10.282866 | orchestrator |
2025-05-25 04:19:10.283205 | orchestrator | PLAY [Gather facts for all hosts if using --limit] *****************************
2025-05-25 04:19:10.283609 | orchestrator |
2025-05-25 04:19:10.284001 | orchestrator | TASK [Gather facts for all hosts] **********************************************
2025-05-25 04:19:10.284666 | orchestrator | Sunday 25 May 2025 04:19:10 +0000 (0:00:05.111) 0:00:07.715 ************
2025-05-25 04:19:10.437503 | orchestrator | skipping: [testbed-manager]
2025-05-25 04:19:10.518075 | orchestrator | skipping: [testbed-node-0]
2025-05-25 04:19:10.602331 | orchestrator | skipping: [testbed-node-1]
2025-05-25 04:19:10.682169 | orchestrator | skipping: [testbed-node-2]
2025-05-25 04:19:10.759181 | orchestrator | skipping: [testbed-node-3]
2025-05-25 04:19:10.803668 | orchestrator | skipping: [testbed-node-4]
2025-05-25 04:19:10.804973 | orchestrator | skipping: [testbed-node-5]
2025-05-25 04:19:10.806564 | orchestrator |
2025-05-25 04:19:10.807099 | orchestrator | PLAY RECAP *********************************************************************
2025-05-25 04:19:10.808475 | orchestrator | 2025-05-25 04:19:10 | INFO  | Play has been completed. There may now be a delay until all logs have been written.
2025-05-25 04:19:10.808505 | orchestrator | 2025-05-25 04:19:10 | INFO  | Please wait and do not abort execution.
2025-05-25 04:19:10.809769 | orchestrator | testbed-manager : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-05-25 04:19:10.811060 | orchestrator | testbed-node-0 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-05-25 04:19:10.812013 | orchestrator | testbed-node-1 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-05-25 04:19:10.813410 | orchestrator | testbed-node-2 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-05-25 04:19:10.814224 | orchestrator | testbed-node-3 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-05-25 04:19:10.815123 | orchestrator | testbed-node-4 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-05-25 04:19:10.815895 | orchestrator | testbed-node-5 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-05-25 04:19:10.816658 | orchestrator |
2025-05-25 04:19:10.817333 | orchestrator |
2025-05-25 04:19:10.818111 | orchestrator | TASKS RECAP ********************************************************************
2025-05-25 04:19:10.818817 | orchestrator | Sunday 25 May 2025 04:19:10 +0000 (0:00:00.536) 0:00:08.252 ************
2025-05-25 04:19:10.819314 | orchestrator | ===============================================================================
2025-05-25 04:19:10.819927 | orchestrator | Gathers facts about hosts ----------------------------------------------- 5.11s
2025-05-25 04:19:10.820521 | orchestrator | osism.commons.facts : Copy fact files ----------------------------------- 1.24s
2025-05-25 04:19:10.821873 | orchestrator | osism.commons.facts : Create custom facts directory --------------------- 1.11s
2025-05-25 04:19:10.822803 | orchestrator | Gather facts for all hosts ---------------------------------------------- 0.54s
2025-05-25 04:19:11.455826 | orchestrator | + osism validate ceph-mons
2025-05-25 04:19:32.360220 | orchestrator |
2025-05-25 04:19:32.360380 | orchestrator | PLAY [Ceph validate mons] ******************************************************
2025-05-25 04:19:32.360400 | orchestrator |
2025-05-25 04:19:32.360412 | orchestrator | TASK [Get timestamp for report file] *******************************************
2025-05-25 04:19:32.360423 | orchestrator | Sunday 25 May 2025 04:19:17 +0000 (0:00:00.433) 0:00:00.433 ************
2025-05-25 04:19:32.360435 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)]
2025-05-25 04:19:32.360446 | orchestrator |
2025-05-25 04:19:32.360457 | orchestrator | TASK [Create report output directory] ******************************************
2025-05-25 04:19:32.360468 | orchestrator | Sunday 25 May 2025 04:19:18 +0000 (0:00:00.649) 0:00:01.083 ************
2025-05-25 04:19:32.360479 | orchestrator | changed: [testbed-node-0 -> testbed-manager(192.168.16.5)]
2025-05-25 04:19:32.360490 | orchestrator |
2025-05-25 04:19:32.360501 | orchestrator | TASK [Define report vars] ******************************************************
2025-05-25 04:19:32.360512 | orchestrator | Sunday 25 May 2025 04:19:18 +0000 (0:00:00.796) 0:00:01.880 ************
2025-05-25 04:19:32.360523 | orchestrator | ok: [testbed-node-0]
2025-05-25 04:19:32.360535 | orchestrator |
2025-05-25 04:19:32.360545 | orchestrator | TASK [Prepare test data for container existance test] **************************
2025-05-25 04:19:32.360572 | orchestrator | Sunday 25 May 2025 04:19:19 +0000 (0:00:00.233) 0:00:02.114 ************
2025-05-25 04:19:32.360583 | orchestrator | ok: [testbed-node-0]
2025-05-25 04:19:32.360594 | orchestrator | ok: [testbed-node-1]
2025-05-25 04:19:32.360605 | orchestrator | ok: [testbed-node-2]
2025-05-25 04:19:32.360616 | orchestrator |
2025-05-25 04:19:32.360627 | orchestrator | TASK [Get container info] ******************************************************
2025-05-25 04:19:32.360638 | orchestrator | Sunday 25 May 2025 04:19:19 +0000 (0:00:00.295) 0:00:02.409 ************
2025-05-25 04:19:32.360649 | orchestrator | ok: [testbed-node-0]
2025-05-25 04:19:32.360660 | orchestrator | ok: [testbed-node-1]
2025-05-25 04:19:32.360671 | orchestrator | ok: [testbed-node-2]
2025-05-25 04:19:32.360681 | orchestrator |
2025-05-25 04:19:32.360692 | orchestrator | TASK [Set test result to failed if container is missing] ***********************
2025-05-25 04:19:32.360703 | orchestrator | Sunday 25 May 2025 04:19:20 +0000 (0:00:00.965) 0:00:03.375 ************
2025-05-25 04:19:32.360715 | orchestrator | skipping: [testbed-node-0]
2025-05-25 04:19:32.360726 | orchestrator | skipping: [testbed-node-1]
2025-05-25 04:19:32.360737 | orchestrator | skipping: [testbed-node-2]
2025-05-25 04:19:32.360748 | orchestrator |
2025-05-25 04:19:32.360761 | orchestrator | TASK [Set test result to passed if container is existing] **********************
2025-05-25 04:19:32.360774 | orchestrator | Sunday 25 May 2025 04:19:20 +0000 (0:00:00.279) 0:00:03.655 ************
2025-05-25 04:19:32.360787 | orchestrator | ok: [testbed-node-0]
2025-05-25 04:19:32.360801 | orchestrator | ok: [testbed-node-1]
2025-05-25 04:19:32.360813 | orchestrator | ok: [testbed-node-2]
2025-05-25 04:19:32.360825 | orchestrator |
2025-05-25 04:19:32.360838 | orchestrator | TASK [Prepare test data] *******************************************************
2025-05-25 04:19:32.360851 | orchestrator | Sunday 25 May 2025 04:19:21 +0000 (0:00:00.295) 0:00:04.127 ************
2025-05-25 04:19:32.360863 | orchestrator | ok: [testbed-node-0]
2025-05-25 04:19:32.360875 | orchestrator | ok: [testbed-node-1]
2025-05-25 04:19:32.360887 | orchestrator | ok: [testbed-node-2]
2025-05-25 04:19:32.360899 | orchestrator |
2025-05-25 04:19:32.360912 | orchestrator | TASK [Set test result to failed if ceph-mon is not running] ********************
2025-05-25 04:19:32.360944 | orchestrator | Sunday 25 May 2025 04:19:21 +0000 (0:00:00.295) 0:00:04.423 ************
2025-05-25 04:19:32.360956 | orchestrator | skipping: [testbed-node-0]
2025-05-25 04:19:32.360967 | orchestrator | skipping: [testbed-node-1]
2025-05-25 04:19:32.360978 | orchestrator | skipping: [testbed-node-2]
2025-05-25 04:19:32.360989 | orchestrator |
2025-05-25 04:19:32.360999 | orchestrator | TASK [Set test result to passed if ceph-mon is running] ************************
2025-05-25 04:19:32.361010 | orchestrator | Sunday 25 May 2025 04:19:21 +0000 (0:00:00.295) 0:00:04.718 ************
2025-05-25 04:19:32.361021 | orchestrator | ok: [testbed-node-0]
2025-05-25 04:19:32.361032 | orchestrator | ok: [testbed-node-1]
2025-05-25 04:19:32.361043 | orchestrator | ok: [testbed-node-2]
2025-05-25 04:19:32.361054 | orchestrator |
2025-05-25 04:19:32.361064 | orchestrator | TASK [Aggregate test results step one] *****************************************
2025-05-25 04:19:32.361075 | orchestrator | Sunday 25 May 2025 04:19:21 +0000 (0:00:00.273) 0:00:04.991 ************
2025-05-25 04:19:32.361086 | orchestrator | skipping: [testbed-node-0]
2025-05-25 04:19:32.361097 | orchestrator |
2025-05-25 04:19:32.361108 | orchestrator | TASK [Aggregate test results step two] *****************************************
2025-05-25 04:19:32.361119 | orchestrator | Sunday 25 May 2025 04:19:22 +0000 (0:00:00.628) 0:00:05.620 ************
2025-05-25 04:19:32.361130 | orchestrator | skipping: [testbed-node-0]
2025-05-25 04:19:32.361141 | orchestrator |
2025-05-25 04:19:32.361152 | orchestrator | TASK [Aggregate test results step three] ***************************************
2025-05-25 04:19:32.361163 | orchestrator | Sunday 25 May 2025 04:19:22 +0000 (0:00:00.250) 0:00:05.870 ************
2025-05-25 04:19:32.361174 | orchestrator | skipping: [testbed-node-0]
2025-05-25 04:19:32.361184 | orchestrator |
2025-05-25 04:19:32.361195 | orchestrator | TASK [Flush handlers] **********************************************************
2025-05-25 04:19:32.361207 | orchestrator | Sunday 25 May 2025 04:19:23 +0000 (0:00:00.074) 0:00:06.124 ************
2025-05-25 04:19:32.361218 | orchestrator |
2025-05-25 04:19:32.361229 | orchestrator | TASK [Flush handlers] **********************************************************
2025-05-25 04:19:32.361240 | orchestrator | Sunday 25 May 2025 04:19:23 +0000 (0:00:00.069) 0:00:06.198 ************
2025-05-25 04:19:32.361251 | orchestrator |
2025-05-25 04:19:32.361262 | orchestrator | TASK [Flush handlers] **********************************************************
2025-05-25 04:19:32.361295 | orchestrator | Sunday 25 May 2025 04:19:23 +0000 (0:00:00.071) 0:00:06.268 ************
2025-05-25 04:19:32.361307 | orchestrator |
2025-05-25 04:19:32.361318 | orchestrator | TASK [Print report file information] *******************************************
2025-05-25 04:19:32.361329 | orchestrator | Sunday 25 May 2025 04:19:23 +0000 (0:00:00.232) 0:00:06.339 ************
2025-05-25 04:19:32.361340 | orchestrator | skipping: [testbed-node-0]
2025-05-25 04:19:32.361351 | orchestrator |
2025-05-25 04:19:32.361362 | orchestrator | TASK [Fail due to missing containers] ******************************************
2025-05-25 04:19:32.361373 | orchestrator | Sunday 25 May 2025 04:19:23 +0000 (0:00:00.249) 0:00:06.572 ************
2025-05-25 04:19:32.361384 | orchestrator | skipping: [testbed-node-0]
2025-05-25 04:19:32.361395 | orchestrator |
2025-05-25 04:19:32.361422 | orchestrator | TASK [Prepare quorum test vars] ************************************************
2025-05-25 04:19:32.361434 | orchestrator | Sunday 25 May 2025 04:19:23 +0000 (0:00:00.250) 0:00:06.822 ************
2025-05-25 04:19:32.361445 | orchestrator | ok: [testbed-node-0]
2025-05-25 04:19:32.361456 | orchestrator |
2025-05-25 04:19:32.361467 | orchestrator | TASK [Get monmap info from one mon container] **********************************
2025-05-25 04:19:32.361485 | orchestrator | Sunday 25 May 2025 04:19:23 +0000 (0:00:00.115) 0:00:06.937 ************
2025-05-25 04:19:32.361504 | orchestrator | changed: [testbed-node-0]
2025-05-25 04:19:32.361525 | orchestrator |
2025-05-25 04:19:32.361552 | orchestrator | TASK [Set quorum test data] ****************************************************
2025-05-25 04:19:32.361570 | orchestrator | Sunday 25 May 2025 04:19:25 +0000 (0:00:01.576) 0:00:08.514 ************
2025-05-25 04:19:32.361588 | orchestrator | ok: [testbed-node-0]
2025-05-25 04:19:32.361606 | orchestrator |
2025-05-25 04:19:32.361621 | orchestrator | TASK [Fail quorum test if not all monitors are in quorum] **********************
2025-05-25 04:19:32.361655 | orchestrator | Sunday 25 May 2025 04:19:25 +0000 (0:00:00.334) 0:00:08.848 ************
2025-05-25 04:19:32.361673 | orchestrator | skipping: [testbed-node-0]
2025-05-25 04:19:32.361690 | orchestrator |
2025-05-25 04:19:32.361709 | orchestrator | TASK [Pass quorum test if all monitors are in quorum] **************************
2025-05-25 04:19:32.361728 | orchestrator | Sunday 25 May 2025 04:19:26 +0000 (0:00:00.307) 0:00:09.155 ************
2025-05-25 04:19:32.361747 | orchestrator | ok: [testbed-node-0]
2025-05-25 04:19:32.361764 | orchestrator |
2025-05-25 04:19:32.361782 | orchestrator | TASK [Set fsid test vars] ******************************************************
2025-05-25 04:19:32.361800 | orchestrator | Sunday 25 May 2025 04:19:26 +0000 (0:00:00.313) 0:00:09.468 ************
2025-05-25 04:19:32.361820 | orchestrator | ok: [testbed-node-0]
2025-05-25 04:19:32.361837 | orchestrator |
2025-05-25 04:19:32.361856 | orchestrator | TASK [Fail Cluster FSID test if FSID does not match configuration] *************
2025-05-25 04:19:32.361875 | orchestrator | Sunday 25 May 2025 04:19:26 +0000 (0:00:00.297) 0:00:09.766 ************
2025-05-25 04:19:32.361890 | orchestrator | skipping: [testbed-node-0]
2025-05-25 04:19:32.361905 | orchestrator |
2025-05-25 04:19:32.361920 | orchestrator | TASK [Pass Cluster FSID test if it matches configuration] **********************
2025-05-25 04:19:32.361936 | orchestrator | Sunday 25 May 2025 04:19:26 +0000 (0:00:00.113) 0:00:09.880 ************
2025-05-25 04:19:32.361951 | orchestrator | ok: [testbed-node-0]
2025-05-25 04:19:32.361968 | orchestrator |
2025-05-25 04:19:32.361985 | orchestrator | TASK [Prepare status test vars] ************************************************
2025-05-25 04:19:32.362004 | orchestrator | Sunday 25 May 2025 04:19:26 +0000 (0:00:00.124) 0:00:10.004 ************
2025-05-25 04:19:32.362095 | orchestrator | ok: [testbed-node-0]
2025-05-25 04:19:32.362108 | orchestrator |
2025-05-25 04:19:32.362119 | orchestrator | TASK [Gather status data] ******************************************************
2025-05-25 04:19:32.362130 | orchestrator | Sunday 25 May 2025 04:19:27 +0000 (0:00:00.119) 0:00:10.124 ************
2025-05-25 04:19:32.362141 | orchestrator | changed: [testbed-node-0]
2025-05-25 04:19:32.362151 | orchestrator |
2025-05-25 04:19:32.362162 | orchestrator | TASK [Set health test data] ****************************************************
2025-05-25 04:19:32.362174 | orchestrator | Sunday 25 May 2025 04:19:28 +0000 (0:00:01.344) 0:00:11.468 ************
2025-05-25 04:19:32.362184 | orchestrator | ok: [testbed-node-0]
2025-05-25 04:19:32.362195 | orchestrator |
2025-05-25 04:19:32.362206 | orchestrator | TASK [Fail cluster-health if health is not acceptable] *************************
2025-05-25 04:19:32.362217 | orchestrator | Sunday 25 May 2025 04:19:28 +0000 (0:00:00.311) 0:00:11.780 ************
2025-05-25 04:19:32.362227 | orchestrator | skipping: [testbed-node-0]
2025-05-25 04:19:32.362238 | orchestrator |
2025-05-25 04:19:32.362249 | orchestrator | TASK [Pass cluster-health if health is acceptable] *****************************
2025-05-25 04:19:32.362260 | orchestrator | Sunday 25 May 2025 04:19:28 +0000 (0:00:00.127) 0:00:11.907 ************
2025-05-25 04:19:32.362293 | orchestrator | ok: [testbed-node-0]
2025-05-25 04:19:32.362312 | orchestrator |
2025-05-25 04:19:32.362323 | orchestrator | TASK [Fail cluster-health if health is not acceptable (strict)] ****************
2025-05-25 04:19:32.362334 | orchestrator | Sunday 25 May 2025 04:19:29 +0000 (0:00:00.159) 0:00:12.067 ************
2025-05-25 04:19:32.362345 | orchestrator | skipping: [testbed-node-0]
2025-05-25 04:19:32.362355 | orchestrator |
2025-05-25 04:19:32.362366 | orchestrator | TASK [Pass cluster-health if status is OK (strict)] ****************************
2025-05-25 04:19:32.362377 | orchestrator | Sunday 25 May 2025 04:19:29 +0000 (0:00:00.131) 0:00:12.198 ************
2025-05-25 04:19:32.362388 | orchestrator | skipping: [testbed-node-0]
2025-05-25 04:19:32.362398 | orchestrator |
2025-05-25 04:19:32.362409 | orchestrator | TASK [Set validation result to passed if no test failed] ***********************
2025-05-25 04:19:32.362432 | orchestrator | Sunday 25 May 2025 04:19:29 +0000 (0:00:00.347) 0:00:12.546 ************
2025-05-25 04:19:32.362443 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)]
2025-05-25 04:19:32.362454 | orchestrator |
2025-05-25 04:19:32.362465 | orchestrator | TASK [Set validation result to failed if a test failed] ************************
2025-05-25 04:19:32.362485 | orchestrator | Sunday 25 May 2025 04:19:29 +0000 (0:00:00.243) 0:00:12.790 ************
2025-05-25 04:19:32.362496 | orchestrator | skipping: [testbed-node-0]
2025-05-25 04:19:32.362507 | orchestrator |
2025-05-25 04:19:32.362518 | orchestrator | TASK [Aggregate test results step one] *****************************************
2025-05-25 04:19:32.362529 | orchestrator | Sunday 25 May 2025 04:19:30 +0000 (0:00:00.240) 0:00:13.030 ************
2025-05-25 04:19:32.362540 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)]
2025-05-25 04:19:32.362551 | orchestrator |
2025-05-25 04:19:32.362562 | orchestrator | TASK [Aggregate test results step two] *****************************************
2025-05-25 04:19:32.362578 | orchestrator | Sunday 25 May 2025 04:19:31 +0000 (0:00:01.609) 0:00:14.639 ************
2025-05-25 04:19:32.362589 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)]
2025-05-25 04:19:32.362600 | orchestrator |
2025-05-25 04:19:32.362610 | orchestrator | TASK [Aggregate test results step three] ***************************************
2025-05-25 04:19:32.362621 | orchestrator | Sunday 25 May 2025 04:19:31 +0000 (0:00:00.251) 0:00:14.891 ************
2025-05-25 04:19:32.362632 | orchestrator | changed: [testbed-node-0 -> testbed-manager(192.168.16.5)]
2025-05-25 04:19:32.362643 | orchestrator |
2025-05-25 04:19:32.362664 | orchestrator | TASK [Flush handlers] **********************************************************
2025-05-25 04:19:34.659121 | orchestrator | Sunday 25 May 2025 04:19:32 +0000 (0:00:00.256) 0:00:15.147 ************
2025-05-25 04:19:34.659223 | orchestrator |
2025-05-25 04:19:34.659238 | orchestrator | TASK [Flush handlers] **********************************************************
2025-05-25 04:19:34.659250 | orchestrator | Sunday 25 May 2025 04:19:32 +0000 (0:00:00.074) 0:00:15.222 ************
2025-05-25 04:19:34.659261 | orchestrator |
2025-05-25 04:19:34.659304 | orchestrator | TASK [Flush handlers] **********************************************************
2025-05-25 04:19:34.659317 | orchestrator | Sunday 25 May 2025 04:19:32 +0000 (0:00:00.068) 0:00:15.290 ************
2025-05-25 04:19:34.659328 | orchestrator |
2025-05-25 04:19:34.659339 | orchestrator | RUNNING HANDLER [Write report file] ********************************************
2025-05-25 04:19:34.659349 | orchestrator | Sunday 25 May 2025 04:19:32 +0000 (0:00:00.069) 0:00:15.360 ************
2025-05-25 04:19:34.659361 | orchestrator | changed: [testbed-node-0 -> testbed-manager(192.168.16.5)]
2025-05-25 04:19:34.659372 | orchestrator |
2025-05-25 04:19:34.659383 | orchestrator | TASK [Print report file information] *******************************************
2025-05-25 04:19:34.659412 | orchestrator | Sunday 25 May 2025 04:19:33 +0000 (0:00:01.450) 0:00:16.811 ************
2025-05-25 04:19:34.659423 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] => {
2025-05-25 04:19:34.659434 | orchestrator |  "msg": [
2025-05-25 04:19:34.659446 | orchestrator |  "Validator run completed.",
2025-05-25 04:19:34.659458 | orchestrator |  "You can find the report file here:",
2025-05-25 04:19:34.659469 | orchestrator |  "/opt/reports/validator/ceph-mons-validator-2025-05-25T04:19:17+00:00-report.json",
2025-05-25 04:19:34.659480 | orchestrator |  "on the following host:",
2025-05-25 04:19:34.659491 | orchestrator |  "testbed-manager"
2025-05-25 04:19:34.659502 | orchestrator |  ]
2025-05-25 04:19:34.659513 | orchestrator | }
2025-05-25 04:19:34.659524 | orchestrator |
2025-05-25 04:19:34.659535 | orchestrator | PLAY RECAP *********************************************************************
2025-05-25 04:19:34.659547 | orchestrator | testbed-node-0 : ok=24  changed=5  unreachable=0 failed=0 skipped=13  rescued=0 ignored=0
2025-05-25 04:19:34.659560 | orchestrator | testbed-node-1 : ok=5  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-05-25 04:19:34.659571 | orchestrator | testbed-node-2 : ok=5  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-05-25 04:19:34.659582 | orchestrator |
2025-05-25 04:19:34.659592 | orchestrator |
2025-05-25 04:19:34.659603 | orchestrator | TASKS RECAP ********************************************************************
2025-05-25 04:19:34.659636 | orchestrator | Sunday 25 May 2025 04:19:34 +0000 (0:00:00.561) 0:00:17.372 ************
2025-05-25 04:19:34.659649 | orchestrator | ===============================================================================
2025-05-25 04:19:34.659662 | orchestrator | Aggregate test results step one ----------------------------------------- 1.61s
2025-05-25 04:19:34.659674 | orchestrator | Get monmap info from one mon container ---------------------------------- 1.58s
2025-05-25 04:19:34.659687 | orchestrator | Write report file ------------------------------------------------------- 1.45s
2025-05-25 04:19:34.659699 | orchestrator | Gather status data ------------------------------------------------------ 1.34s
2025-05-25 04:19:34.659712 | orchestrator | Get container info ------------------------------------------------------ 0.97s
2025-05-25 04:19:34.659724 | orchestrator | Create report output directory ------------------------------------------ 0.80s
2025-05-25 04:19:34.659737 | orchestrator | Get timestamp for report file ------------------------------------------- 0.65s
2025-05-25 04:19:34.659750 | orchestrator | Aggregate test results step one ----------------------------------------- 0.63s
2025-05-25 04:19:34.659762 | orchestrator | Print report file information ------------------------------------------- 0.56s
2025-05-25 04:19:34.659774 | orchestrator | Set test result to passed if container is existing ---------------------- 0.47s
2025-05-25 04:19:34.659786 | orchestrator | Pass cluster-health if status is OK (strict) ---------------------------- 0.35s
2025-05-25 04:19:34.659799 | orchestrator | Set quorum test data ---------------------------------------------------- 0.33s
2025-05-25 04:19:34.659811 | orchestrator | Pass quorum test if all monitors are in quorum -------------------------- 0.31s
2025-05-25 04:19:34.659823 | orchestrator | Set health test data ---------------------------------------------------- 0.31s
2025-05-25 04:19:34.659835 | orchestrator | Fail quorum test if not all monitors are in quorum ---------------------- 0.31s
2025-05-25 04:19:34.659847 | orchestrator | Set fsid test vars ------------------------------------------------------ 0.30s
2025-05-25 04:19:34.659859 | orchestrator | Prepare test data ------------------------------------------------------- 0.30s
2025-05-25 04:19:34.659873 | orchestrator | Set test result to failed if ceph-mon is not running -------------------- 0.30s
2025-05-25 04:19:34.659885 | orchestrator | Prepare test data for container existance test -------------------------- 0.30s
2025-05-25 04:19:34.659897 | orchestrator | Set test result to failed if container is missing ----------------------- 0.28s
2025-05-25 04:19:34.890831 | orchestrator | + osism validate ceph-mgrs
2025-05-25 04:19:55.294254 | orchestrator |
2025-05-25 04:19:55.294455 | orchestrator | PLAY [Ceph validate mgrs] ******************************************************
2025-05-25 04:19:55.294473 | orchestrator |
2025-05-25 04:19:55.294486 | orchestrator | TASK [Get timestamp for report file] *******************************************
2025-05-25 04:19:55.294498 | orchestrator | Sunday 25 May 2025 04:19:40 +0000 (0:00:00.438) 0:00:00.438 ************
2025-05-25 04:19:55.294509 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)]
2025-05-25 04:19:55.294521 | orchestrator |
2025-05-25 04:19:55.294532 | orchestrator | TASK [Create report output directory] ******************************************
2025-05-25 04:19:55.294543 | orchestrator | Sunday 25 May 2025 04:19:41 +0000 (0:00:00.624) 0:00:01.062 ************
2025-05-25 04:19:55.294553 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)]
2025-05-25 04:19:55.294564 | orchestrator |
2025-05-25 04:19:55.294575 | orchestrator | TASK [Define report vars] ******************************************************
2025-05-25 04:19:55.294586 | orchestrator | Sunday 25 May 2025 04:19:42 +0000 (0:00:00.813) 0:00:01.876 ************
2025-05-25 04:19:55.294597 | orchestrator | ok: [testbed-node-0]
2025-05-25 04:19:55.294609 | orchestrator |
2025-05-25 04:19:55.294620 | orchestrator | TASK [Prepare test data for container existance test] **************************
2025-05-25 04:19:55.294630 | orchestrator |
orchestrator | Sunday 25 May 2025 04:19:42 +0000 (0:00:00.243) 0:00:02.119 ************ 2025-05-25 04:19:55.294642 | orchestrator | ok: [testbed-node-0] 2025-05-25 04:19:55.294653 | orchestrator | ok: [testbed-node-1] 2025-05-25 04:19:55.294663 | orchestrator | ok: [testbed-node-2] 2025-05-25 04:19:55.294698 | orchestrator | 2025-05-25 04:19:55.294710 | orchestrator | TASK [Get container info] ****************************************************** 2025-05-25 04:19:55.294721 | orchestrator | Sunday 25 May 2025 04:19:42 +0000 (0:00:00.301) 0:00:02.421 ************ 2025-05-25 04:19:55.294731 | orchestrator | ok: [testbed-node-2] 2025-05-25 04:19:55.294742 | orchestrator | ok: [testbed-node-0] 2025-05-25 04:19:55.294767 | orchestrator | ok: [testbed-node-1] 2025-05-25 04:19:55.294781 | orchestrator | 2025-05-25 04:19:55.294794 | orchestrator | TASK [Set test result to failed if container is missing] *********************** 2025-05-25 04:19:55.294806 | orchestrator | Sunday 25 May 2025 04:19:43 +0000 (0:00:00.932) 0:00:03.353 ************ 2025-05-25 04:19:55.294818 | orchestrator | skipping: [testbed-node-0] 2025-05-25 04:19:55.294831 | orchestrator | skipping: [testbed-node-1] 2025-05-25 04:19:55.294843 | orchestrator | skipping: [testbed-node-2] 2025-05-25 04:19:55.294855 | orchestrator | 2025-05-25 04:19:55.294867 | orchestrator | TASK [Set test result to passed if container is existing] ********************** 2025-05-25 04:19:55.294879 | orchestrator | Sunday 25 May 2025 04:19:44 +0000 (0:00:00.312) 0:00:03.665 ************ 2025-05-25 04:19:55.294892 | orchestrator | ok: [testbed-node-0] 2025-05-25 04:19:55.294904 | orchestrator | ok: [testbed-node-1] 2025-05-25 04:19:55.294915 | orchestrator | ok: [testbed-node-2] 2025-05-25 04:19:55.294928 | orchestrator | 2025-05-25 04:19:55.294940 | orchestrator | TASK [Prepare test data] ******************************************************* 2025-05-25 04:19:55.294952 | orchestrator | Sunday 25 May 2025 04:19:44 +0000 
(0:00:00.502) 0:00:04.168 ************ 2025-05-25 04:19:55.294964 | orchestrator | ok: [testbed-node-0] 2025-05-25 04:19:55.294976 | orchestrator | ok: [testbed-node-1] 2025-05-25 04:19:55.294988 | orchestrator | ok: [testbed-node-2] 2025-05-25 04:19:55.295000 | orchestrator | 2025-05-25 04:19:55.295012 | orchestrator | TASK [Set test result to failed if ceph-mgr is not running] ******************** 2025-05-25 04:19:55.295025 | orchestrator | Sunday 25 May 2025 04:19:44 +0000 (0:00:00.299) 0:00:04.467 ************ 2025-05-25 04:19:55.295037 | orchestrator | skipping: [testbed-node-0] 2025-05-25 04:19:55.295049 | orchestrator | skipping: [testbed-node-1] 2025-05-25 04:19:55.295062 | orchestrator | skipping: [testbed-node-2] 2025-05-25 04:19:55.295074 | orchestrator | 2025-05-25 04:19:55.295086 | orchestrator | TASK [Set test result to passed if ceph-mgr is running] ************************ 2025-05-25 04:19:55.295098 | orchestrator | Sunday 25 May 2025 04:19:45 +0000 (0:00:00.279) 0:00:04.746 ************ 2025-05-25 04:19:55.295110 | orchestrator | ok: [testbed-node-0] 2025-05-25 04:19:55.295122 | orchestrator | ok: [testbed-node-1] 2025-05-25 04:19:55.295134 | orchestrator | ok: [testbed-node-2] 2025-05-25 04:19:55.295147 | orchestrator | 2025-05-25 04:19:55.295159 | orchestrator | TASK [Aggregate test results step one] ***************************************** 2025-05-25 04:19:55.295171 | orchestrator | Sunday 25 May 2025 04:19:45 +0000 (0:00:00.294) 0:00:05.041 ************ 2025-05-25 04:19:55.295181 | orchestrator | skipping: [testbed-node-0] 2025-05-25 04:19:55.295192 | orchestrator | 2025-05-25 04:19:55.295203 | orchestrator | TASK [Aggregate test results step two] ***************************************** 2025-05-25 04:19:55.295213 | orchestrator | Sunday 25 May 2025 04:19:46 +0000 (0:00:00.659) 0:00:05.700 ************ 2025-05-25 04:19:55.295224 | orchestrator | skipping: [testbed-node-0] 2025-05-25 04:19:55.295235 | orchestrator | 2025-05-25 04:19:55.295245 
| orchestrator | TASK [Aggregate test results step three] *************************************** 2025-05-25 04:19:55.295256 | orchestrator | Sunday 25 May 2025 04:19:46 +0000 (0:00:00.254) 0:00:05.954 ************ 2025-05-25 04:19:55.295289 | orchestrator | skipping: [testbed-node-0] 2025-05-25 04:19:55.295300 | orchestrator | 2025-05-25 04:19:55.295312 | orchestrator | TASK [Flush handlers] ********************************************************** 2025-05-25 04:19:55.295323 | orchestrator | Sunday 25 May 2025 04:19:46 +0000 (0:00:00.259) 0:00:06.213 ************ 2025-05-25 04:19:55.295334 | orchestrator | 2025-05-25 04:19:55.295345 | orchestrator | TASK [Flush handlers] ********************************************************** 2025-05-25 04:19:55.295356 | orchestrator | Sunday 25 May 2025 04:19:46 +0000 (0:00:00.091) 0:00:06.305 ************ 2025-05-25 04:19:55.295376 | orchestrator | 2025-05-25 04:19:55.295387 | orchestrator | TASK [Flush handlers] ********************************************************** 2025-05-25 04:19:55.295398 | orchestrator | Sunday 25 May 2025 04:19:46 +0000 (0:00:00.070) 0:00:06.375 ************ 2025-05-25 04:19:55.295408 | orchestrator | 2025-05-25 04:19:55.295419 | orchestrator | TASK [Print report file information] ******************************************* 2025-05-25 04:19:55.295430 | orchestrator | Sunday 25 May 2025 04:19:46 +0000 (0:00:00.071) 0:00:06.447 ************ 2025-05-25 04:19:55.295440 | orchestrator | skipping: [testbed-node-0] 2025-05-25 04:19:55.295451 | orchestrator | 2025-05-25 04:19:55.295462 | orchestrator | TASK [Fail due to missing containers] ****************************************** 2025-05-25 04:19:55.295473 | orchestrator | Sunday 25 May 2025 04:19:47 +0000 (0:00:00.255) 0:00:06.702 ************ 2025-05-25 04:19:55.295484 | orchestrator | skipping: [testbed-node-0] 2025-05-25 04:19:55.295495 | orchestrator | 2025-05-25 04:19:55.295535 | orchestrator | TASK [Define mgr module test vars] 
********************************************* 2025-05-25 04:19:55.295547 | orchestrator | Sunday 25 May 2025 04:19:47 +0000 (0:00:00.247) 0:00:06.950 ************ 2025-05-25 04:19:55.295558 | orchestrator | ok: [testbed-node-0] 2025-05-25 04:19:55.295569 | orchestrator | 2025-05-25 04:19:55.295580 | orchestrator | TASK [Gather list of mgr modules] ********************************************** 2025-05-25 04:19:55.295590 | orchestrator | Sunday 25 May 2025 04:19:47 +0000 (0:00:00.119) 0:00:07.069 ************ 2025-05-25 04:19:55.295601 | orchestrator | changed: [testbed-node-0] 2025-05-25 04:19:55.295612 | orchestrator | 2025-05-25 04:19:55.295622 | orchestrator | TASK [Parse mgr module list from json] ***************************************** 2025-05-25 04:19:55.295633 | orchestrator | Sunday 25 May 2025 04:19:49 +0000 (0:00:01.923) 0:00:08.992 ************ 2025-05-25 04:19:55.295643 | orchestrator | ok: [testbed-node-0] 2025-05-25 04:19:55.295654 | orchestrator | 2025-05-25 04:19:55.295664 | orchestrator | TASK [Extract list of enabled mgr modules] ************************************* 2025-05-25 04:19:55.295675 | orchestrator | Sunday 25 May 2025 04:19:49 +0000 (0:00:00.264) 0:00:09.257 ************ 2025-05-25 04:19:55.295685 | orchestrator | ok: [testbed-node-0] 2025-05-25 04:19:55.295696 | orchestrator | 2025-05-25 04:19:55.295707 | orchestrator | TASK [Fail test if mgr modules are disabled that should be enabled] ************ 2025-05-25 04:19:55.295718 | orchestrator | Sunday 25 May 2025 04:19:50 +0000 (0:00:00.700) 0:00:09.957 ************ 2025-05-25 04:19:55.295728 | orchestrator | skipping: [testbed-node-0] 2025-05-25 04:19:55.295739 | orchestrator | 2025-05-25 04:19:55.295750 | orchestrator | TASK [Pass test if required mgr modules are enabled] *************************** 2025-05-25 04:19:55.295761 | orchestrator | Sunday 25 May 2025 04:19:50 +0000 (0:00:00.135) 0:00:10.092 ************ 2025-05-25 04:19:55.295771 | orchestrator | ok: [testbed-node-0] 
2025-05-25 04:19:55.295782 | orchestrator | 2025-05-25 04:19:55.295796 | orchestrator | TASK [Set validation result to passed if no test failed] *********************** 2025-05-25 04:19:55.295838 | orchestrator | Sunday 25 May 2025 04:19:50 +0000 (0:00:00.138) 0:00:10.231 ************ 2025-05-25 04:19:55.295859 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] 2025-05-25 04:19:55.295876 | orchestrator | 2025-05-25 04:19:55.295895 | orchestrator | TASK [Set validation result to failed if a test failed] ************************ 2025-05-25 04:19:55.295914 | orchestrator | Sunday 25 May 2025 04:19:50 +0000 (0:00:00.241) 0:00:10.472 ************ 2025-05-25 04:19:55.295930 | orchestrator | skipping: [testbed-node-0] 2025-05-25 04:19:55.295940 | orchestrator | 2025-05-25 04:19:55.295951 | orchestrator | TASK [Aggregate test results step one] ***************************************** 2025-05-25 04:19:55.295962 | orchestrator | Sunday 25 May 2025 04:19:51 +0000 (0:00:00.240) 0:00:10.713 ************ 2025-05-25 04:19:55.295972 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] 2025-05-25 04:19:55.295983 | orchestrator | 2025-05-25 04:19:55.295994 | orchestrator | TASK [Aggregate test results step two] ***************************************** 2025-05-25 04:19:55.296014 | orchestrator | Sunday 25 May 2025 04:19:52 +0000 (0:00:01.253) 0:00:11.966 ************ 2025-05-25 04:19:55.296025 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] 2025-05-25 04:19:55.296044 | orchestrator | 2025-05-25 04:19:55.296054 | orchestrator | TASK [Aggregate test results step three] *************************************** 2025-05-25 04:19:55.296065 | orchestrator | Sunday 25 May 2025 04:19:52 +0000 (0:00:00.239) 0:00:12.206 ************ 2025-05-25 04:19:55.296076 | orchestrator | changed: [testbed-node-0 -> testbed-manager(192.168.16.5)] 2025-05-25 04:19:55.296087 | orchestrator | 2025-05-25 04:19:55.296098 | orchestrator | TASK [Flush 
handlers] ********************************************************** 2025-05-25 04:19:55.296108 | orchestrator | Sunday 25 May 2025 04:19:52 +0000 (0:00:00.269) 0:00:12.475 ************ 2025-05-25 04:19:55.296119 | orchestrator | 2025-05-25 04:19:55.296130 | orchestrator | TASK [Flush handlers] ********************************************************** 2025-05-25 04:19:55.296140 | orchestrator | Sunday 25 May 2025 04:19:53 +0000 (0:00:00.086) 0:00:12.562 ************ 2025-05-25 04:19:55.296151 | orchestrator | 2025-05-25 04:19:55.296162 | orchestrator | TASK [Flush handlers] ********************************************************** 2025-05-25 04:19:55.296172 | orchestrator | Sunday 25 May 2025 04:19:53 +0000 (0:00:00.070) 0:00:12.632 ************ 2025-05-25 04:19:55.296183 | orchestrator | 2025-05-25 04:19:55.296193 | orchestrator | RUNNING HANDLER [Write report file] ******************************************** 2025-05-25 04:19:55.296204 | orchestrator | Sunday 25 May 2025 04:19:53 +0000 (0:00:00.073) 0:00:12.706 ************ 2025-05-25 04:19:55.296215 | orchestrator | changed: [testbed-node-0 -> testbed-manager(192.168.16.5)] 2025-05-25 04:19:55.296225 | orchestrator | 2025-05-25 04:19:55.296236 | orchestrator | TASK [Print report file information] ******************************************* 2025-05-25 04:19:55.296247 | orchestrator | Sunday 25 May 2025 04:19:54 +0000 (0:00:01.661) 0:00:14.367 ************ 2025-05-25 04:19:55.296257 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] => { 2025-05-25 04:19:55.296288 | orchestrator |  "msg": [ 2025-05-25 04:19:55.296299 | orchestrator |  "Validator run completed.", 2025-05-25 04:19:55.296310 | orchestrator |  "You can find the report file here:", 2025-05-25 04:19:55.296321 | orchestrator |  "/opt/reports/validator/ceph-mgrs-validator-2025-05-25T04:19:41+00:00-report.json", 2025-05-25 04:19:55.296333 | orchestrator |  "on the following host:", 2025-05-25 04:19:55.296344 | orchestrator |  
"testbed-manager" 2025-05-25 04:19:55.296355 | orchestrator |  ] 2025-05-25 04:19:55.296366 | orchestrator | } 2025-05-25 04:19:55.296377 | orchestrator | 2025-05-25 04:19:55.296388 | orchestrator | PLAY RECAP ********************************************************************* 2025-05-25 04:19:55.296400 | orchestrator | testbed-node-0 : ok=19  changed=3  unreachable=0 failed=0 skipped=9  rescued=0 ignored=0 2025-05-25 04:19:55.296412 | orchestrator | testbed-node-1 : ok=5  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-05-25 04:19:55.296432 | orchestrator | testbed-node-2 : ok=5  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-05-25 04:19:55.580374 | orchestrator | 2025-05-25 04:19:55.580443 | orchestrator | 2025-05-25 04:19:55.580449 | orchestrator | TASKS RECAP ******************************************************************** 2025-05-25 04:19:55.580455 | orchestrator | Sunday 25 May 2025 04:19:55 +0000 (0:00:00.408) 0:00:14.776 ************ 2025-05-25 04:19:55.580460 | orchestrator | =============================================================================== 2025-05-25 04:19:55.580464 | orchestrator | Gather list of mgr modules ---------------------------------------------- 1.92s 2025-05-25 04:19:55.580468 | orchestrator | Write report file ------------------------------------------------------- 1.66s 2025-05-25 04:19:55.580472 | orchestrator | Aggregate test results step one ----------------------------------------- 1.25s 2025-05-25 04:19:55.580476 | orchestrator | Get container info ------------------------------------------------------ 0.93s 2025-05-25 04:19:55.580480 | orchestrator | Create report output directory ------------------------------------------ 0.81s 2025-05-25 04:19:55.580483 | orchestrator | Extract list of enabled mgr modules ------------------------------------- 0.70s 2025-05-25 04:19:55.580509 | orchestrator | Aggregate test results step one ----------------------------------------- 0.66s 
2025-05-25 04:19:55.580513 | orchestrator | Get timestamp for report file ------------------------------------------- 0.62s 2025-05-25 04:19:55.580517 | orchestrator | Set test result to passed if container is existing ---------------------- 0.50s 2025-05-25 04:19:55.580520 | orchestrator | Print report file information ------------------------------------------- 0.41s 2025-05-25 04:19:55.580524 | orchestrator | Set test result to failed if container is missing ----------------------- 0.31s 2025-05-25 04:19:55.580535 | orchestrator | Prepare test data for container existance test -------------------------- 0.30s 2025-05-25 04:19:55.580539 | orchestrator | Prepare test data ------------------------------------------------------- 0.30s 2025-05-25 04:19:55.580543 | orchestrator | Set test result to passed if ceph-mgr is running ------------------------ 0.29s 2025-05-25 04:19:55.580547 | orchestrator | Set test result to failed if ceph-mgr is not running -------------------- 0.28s 2025-05-25 04:19:55.580551 | orchestrator | Aggregate test results step three --------------------------------------- 0.27s 2025-05-25 04:19:55.580554 | orchestrator | Parse mgr module list from json ----------------------------------------- 0.26s 2025-05-25 04:19:55.580558 | orchestrator | Aggregate test results step three --------------------------------------- 0.26s 2025-05-25 04:19:55.580562 | orchestrator | Print report file information ------------------------------------------- 0.26s 2025-05-25 04:19:55.580565 | orchestrator | Aggregate test results step two ----------------------------------------- 0.25s 2025-05-25 04:19:55.810303 | orchestrator | + osism validate ceph-osds 2025-05-25 04:20:06.168726 | orchestrator | 2025-05-25 04:20:06.168837 | orchestrator | PLAY [Ceph validate OSDs] ****************************************************** 2025-05-25 04:20:06.168854 | orchestrator | 2025-05-25 04:20:06.168866 | orchestrator | TASK [Get timestamp for report file] 
******************************************* 2025-05-25 04:20:06.168878 | orchestrator | Sunday 25 May 2025 04:20:01 +0000 (0:00:00.417) 0:00:00.417 ************ 2025-05-25 04:20:06.168890 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] 2025-05-25 04:20:06.168901 | orchestrator | 2025-05-25 04:20:06.168912 | orchestrator | TASK [Get extra vars for Ceph configuration] *********************************** 2025-05-25 04:20:06.168923 | orchestrator | Sunday 25 May 2025 04:20:02 +0000 (0:00:00.663) 0:00:01.081 ************ 2025-05-25 04:20:06.168934 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] 2025-05-25 04:20:06.168945 | orchestrator | 2025-05-25 04:20:06.168956 | orchestrator | TASK [Create report output directory] ****************************************** 2025-05-25 04:20:06.168967 | orchestrator | Sunday 25 May 2025 04:20:02 +0000 (0:00:00.418) 0:00:01.499 ************ 2025-05-25 04:20:06.168978 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] 2025-05-25 04:20:06.168989 | orchestrator | 2025-05-25 04:20:06.168999 | orchestrator | TASK [Define report vars] ****************************************************** 2025-05-25 04:20:06.169010 | orchestrator | Sunday 25 May 2025 04:20:03 +0000 (0:00:00.903) 0:00:02.403 ************ 2025-05-25 04:20:06.169022 | orchestrator | ok: [testbed-node-3] 2025-05-25 04:20:06.169033 | orchestrator | 2025-05-25 04:20:06.169045 | orchestrator | TASK [Define OSD test variables] *********************************************** 2025-05-25 04:20:06.169056 | orchestrator | Sunday 25 May 2025 04:20:03 +0000 (0:00:00.120) 0:00:02.523 ************ 2025-05-25 04:20:06.169067 | orchestrator | skipping: [testbed-node-3] 2025-05-25 04:20:06.169078 | orchestrator | 2025-05-25 04:20:06.169089 | orchestrator | TASK [Calculate OSD devices for each host] ************************************* 2025-05-25 04:20:06.169100 | orchestrator | Sunday 25 May 2025 04:20:04 +0000 (0:00:00.134) 
0:00:02.658 ************ 2025-05-25 04:20:06.169111 | orchestrator | skipping: [testbed-node-3] 2025-05-25 04:20:06.169122 | orchestrator | skipping: [testbed-node-4] 2025-05-25 04:20:06.169132 | orchestrator | skipping: [testbed-node-5] 2025-05-25 04:20:06.169143 | orchestrator | 2025-05-25 04:20:06.169154 | orchestrator | TASK [Define OSD test variables] *********************************************** 2025-05-25 04:20:06.169165 | orchestrator | Sunday 25 May 2025 04:20:04 +0000 (0:00:00.315) 0:00:02.974 ************ 2025-05-25 04:20:06.169200 | orchestrator | ok: [testbed-node-3] 2025-05-25 04:20:06.169212 | orchestrator | 2025-05-25 04:20:06.169222 | orchestrator | TASK [Calculate OSD devices for each host] ************************************* 2025-05-25 04:20:06.169233 | orchestrator | Sunday 25 May 2025 04:20:04 +0000 (0:00:00.139) 0:00:03.113 ************ 2025-05-25 04:20:06.169244 | orchestrator | ok: [testbed-node-3] 2025-05-25 04:20:06.169255 | orchestrator | ok: [testbed-node-4] 2025-05-25 04:20:06.169312 | orchestrator | ok: [testbed-node-5] 2025-05-25 04:20:06.169332 | orchestrator | 2025-05-25 04:20:06.169353 | orchestrator | TASK [Calculate total number of OSDs in cluster] ******************************* 2025-05-25 04:20:06.169374 | orchestrator | Sunday 25 May 2025 04:20:04 +0000 (0:00:00.307) 0:00:03.420 ************ 2025-05-25 04:20:06.169393 | orchestrator | ok: [testbed-node-3] 2025-05-25 04:20:06.169414 | orchestrator | 2025-05-25 04:20:06.169435 | orchestrator | TASK [Prepare test data] ******************************************************* 2025-05-25 04:20:06.169456 | orchestrator | Sunday 25 May 2025 04:20:05 +0000 (0:00:00.555) 0:00:03.976 ************ 2025-05-25 04:20:06.169476 | orchestrator | ok: [testbed-node-3] 2025-05-25 04:20:06.169493 | orchestrator | ok: [testbed-node-4] 2025-05-25 04:20:06.169505 | orchestrator | ok: [testbed-node-5] 2025-05-25 04:20:06.169518 | orchestrator | 2025-05-25 04:20:06.169531 | orchestrator | TASK [Get 
list of ceph-osd containers on host] ********************************* 2025-05-25 04:20:06.169544 | orchestrator | Sunday 25 May 2025 04:20:05 +0000 (0:00:00.529) 0:00:04.505 ************ 2025-05-25 04:20:06.169559 | orchestrator | skipping: [testbed-node-3] => (item={'id': '22e23a4a27b797181a7057cb239c833c1134403e1a7c168cce49a17357fb06b6', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'name': '/nova_compute', 'state': 'running', 'status': 'Up 8 minutes (healthy)'})  2025-05-25 04:20:06.169574 | orchestrator | skipping: [testbed-node-3] => (item={'id': '8e9a524d03103094ccb5cbeac3f77feb10719454590ca0f44684977ce49b35e9', 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'name': '/nova_libvirt', 'state': 'running', 'status': 'Up 8 minutes (healthy)'})  2025-05-25 04:20:06.169589 | orchestrator | skipping: [testbed-node-3] => (item={'id': '02c674a3ca2f23a4f8762e2e86966dc3db2442417b9f159c222191341222f2e4', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'name': '/neutron_ovn_metadata_agent', 'state': 'running', 'status': 'Up 9 minutes (healthy)'})  2025-05-25 04:20:06.169638 | orchestrator | skipping: [testbed-node-3] => (item={'id': '5a8a5704dc5f73b408889aaffaf6f40df2a0c7ae66441a86a489bd3503c65f0d', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'name': '/nova_ssh', 'state': 'running', 'status': 'Up 9 minutes (healthy)'})  2025-05-25 04:20:06.169652 | orchestrator | skipping: [testbed-node-3] => (item={'id': 'ca76f8a7cbc25acb887bd16b7c7ade75d7c1c1254c79bfbefe9c0d06b040c67e', 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'name': '/cinder_backup', 'state': 'running', 'status': 'Up 12 minutes (healthy)'})  2025-05-25 04:20:06.169680 | orchestrator | skipping: [testbed-node-3] => (item={'id': 'a61c8a7d676fe08bb55d8243c76d5f27c82a07159abdaa1a13ecec6c5a444174', 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'name': '/cinder_volume', 'state': 'running', 'status': 'Up 12 minutes (healthy)'})  2025-05-25 
04:20:06.169692 | orchestrator | skipping: [testbed-node-3] => (item={'id': 'bcccbc37383c5307d73f07c398a37b2fafe7c00cdd22005c896e6340779d82a4', 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'name': '/prometheus_libvirt_exporter', 'state': 'running', 'status': 'Up 13 minutes'})  2025-05-25 04:20:06.169704 | orchestrator | skipping: [testbed-node-3] => (item={'id': '284eeff7054000b9fc52b846dc02545622a81a4d570ff39e937fa0951fd33704', 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'name': '/prometheus_cadvisor', 'state': 'running', 'status': 'Up 14 minutes'})  2025-05-25 04:20:06.169715 | orchestrator | skipping: [testbed-node-3] => (item={'id': 'f9e4e2eafd2f871f9c3b8d4f5df6ff34fbbf63ef23c968744503b976e5b7cc9b', 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'name': '/prometheus_node_exporter', 'state': 'running', 'status': 'Up 14 minutes'})  2025-05-25 04:20:06.169741 | orchestrator | skipping: [testbed-node-3] => (item={'id': 'b1f2ff16d09c51c5102c07e752e77022e5706d4e97442640f0c9e2c9eb0bfe4d', 'image': 'registry.osism.tech/osism/ceph-daemon:reef', 'name': '/ceph-rgw-default-testbed-node-3-rgw0', 'state': 'running', 'status': 'Up 20 minutes'})  2025-05-25 04:20:06.169753 | orchestrator | skipping: [testbed-node-3] => (item={'id': '7ff742d2e8595f7585d5801a268736628a0fa908c864c2fb7129d49f4b72390d', 'image': 'registry.osism.tech/osism/ceph-daemon:reef', 'name': '/ceph-mds-testbed-node-3', 'state': 'running', 'status': 'Up 21 minutes'})  2025-05-25 04:20:06.169764 | orchestrator | skipping: [testbed-node-3] => (item={'id': '3c57342e8de363a8aad185a1d290190d14c799caa3a6de6e7a04b3c24984b1f5', 'image': 'registry.osism.tech/osism/ceph-daemon:reef', 'name': '/ceph-crash-testbed-node-3', 'state': 'running', 'status': 'Up 22 minutes'})  2025-05-25 04:20:06.169775 | orchestrator | ok: [testbed-node-3] => (item={'id': '841b57e9303ac8cb3e1bf77988abf6ccc3fa1c0fd82a8eeb64efb86b06e89d69', 'image': 
'registry.osism.tech/osism/ceph-daemon:reef', 'name': '/ceph-osd-0', 'state': 'running', 'status': 'Up 23 minutes'}) 2025-05-25 04:20:06.169787 | orchestrator | ok: [testbed-node-3] => (item={'id': 'f7d7fc4d22cc42e22fab2a4a84372ba995818797a91971c87c012229610bad20', 'image': 'registry.osism.tech/osism/ceph-daemon:reef', 'name': '/ceph-osd-5', 'state': 'running', 'status': 'Up 23 minutes'}) 2025-05-25 04:20:06.169798 | orchestrator | skipping: [testbed-node-3] => (item={'id': '8c867ee50370d51f030930e1abf8d94c38c12b03c401cea74939725532e9970e', 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'name': '/ovn_controller', 'state': 'running', 'status': 'Up 26 minutes'})  2025-05-25 04:20:06.169809 | orchestrator | skipping: [testbed-node-3] => (item={'id': '29b107023ad1e07283892af2faafdd25d35729c959926cbbf478c7d6dcea2793', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'name': '/openvswitch_vswitchd', 'state': 'running', 'status': 'Up 27 minutes (healthy)'})  2025-05-25 04:20:06.169820 | orchestrator | skipping: [testbed-node-3] => (item={'id': '1734da6c913ef54467009531ec50da22d56ea3c4ed8c78faefbe636f60c4589a', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'name': '/openvswitch_db', 'state': 'running', 'status': 'Up 27 minutes (healthy)'})  2025-05-25 04:20:06.169832 | orchestrator | skipping: [testbed-node-3] => (item={'id': 'e5dba3fd5693fc249c78cc5110a1edab0ee93783288e698fd23b01ff873a7318', 'image': 'registry.osism.tech/kolla/cron:2024.2', 'name': '/cron', 'state': 'running', 'status': 'Up 28 minutes'})  2025-05-25 04:20:06.169843 | orchestrator | skipping: [testbed-node-3] => (item={'id': '44872bdf23f315925cac2e3beaa200cabbbb83be2d1263bac4da12a9f9680ddb', 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'name': '/kolla_toolbox', 'state': 'running', 'status': 'Up 28 minutes'})  2025-05-25 04:20:06.169854 | orchestrator | skipping: [testbed-node-3] => (item={'id': 
'd0e1e4cd3990dc27523cc559e4f9a58bb2c996f3b52e74f94097f4a300f049b2', 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'name': '/fluentd', 'state': 'running', 'status': 'Up 29 minutes'})  2025-05-25 04:20:06.169872 | orchestrator | skipping: [testbed-node-4] => (item={'id': 'd34e78deeb1bf406ebd0c56514f4943247ec7df9cd323c86393f28f456991b7d', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'name': '/nova_compute', 'state': 'running', 'status': 'Up 8 minutes (healthy)'})  2025-05-25 04:20:06.169890 | orchestrator | skipping: [testbed-node-4] => (item={'id': '93dd04d478464692909a921006707c1b427d3dbc9895220dea8cac557103c528', 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'name': '/nova_libvirt', 'state': 'running', 'status': 'Up 8 minutes (healthy)'})  2025-05-25 04:20:06.335676 | orchestrator | skipping: [testbed-node-4] => (item={'id': '735587cabd758fa92d9f9666fe922658428fe2236d54bd6ef03c31b01be55ba7', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'name': '/neutron_ovn_metadata_agent', 'state': 'running', 'status': 'Up 9 minutes (healthy)'})  2025-05-25 04:20:06.335803 | orchestrator | skipping: [testbed-node-4] => (item={'id': 'af0c9d9b11efa26160ee68f1c6794eaf7ae5d27592f1bab86932fb2771e3bdf6', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'name': '/nova_ssh', 'state': 'running', 'status': 'Up 9 minutes (healthy)'})  2025-05-25 04:20:06.335819 | orchestrator | skipping: [testbed-node-4] => (item={'id': 'f6e8193837584d50de158a250edeaadba7201e73fb553c2a9a3d3bf87e2c2a63', 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'name': '/cinder_backup', 'state': 'running', 'status': 'Up 12 minutes (healthy)'})  2025-05-25 04:20:06.335833 | orchestrator | skipping: [testbed-node-4] => (item={'id': 'aaa8a97b1481db27f33482bdfa8cff4b7d5752272f81ffdb4e125646e71ba262', 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'name': '/cinder_volume', 'state': 'running', 'status': 'Up 12 minutes (healthy)'})  2025-05-25 
04:20:06.335845 | orchestrator | skipping: [testbed-node-4] => (item={'id': '0849153f3c0034a46c97dd91d9b6041b18b98b4ce8d9aaa2224dda05b50c5d2b', 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'name': '/prometheus_libvirt_exporter', 'state': 'running', 'status': 'Up 13 minutes'})  2025-05-25 04:20:06.335856 | orchestrator | skipping: [testbed-node-4] => (item={'id': 'e1edb900d6d005f211e112b32bd2820ad3e6ae6c0fca99c66de05ecd49b12fdf', 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'name': '/prometheus_cadvisor', 'state': 'running', 'status': 'Up 14 minutes'})  2025-05-25 04:20:06.335867 | orchestrator | skipping: [testbed-node-4] => (item={'id': '23101ed8606f04da73fda6f16ce4790e538a5bf519d5f71f677a0e83d9a671d1', 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'name': '/prometheus_node_exporter', 'state': 'running', 'status': 'Up 14 minutes'})  2025-05-25 04:20:06.335879 | orchestrator | skipping: [testbed-node-4] => (item={'id': '3a7c4b4327d68a8f659051b97e59610c502922d2f54b4a64d7858935873f04fd', 'image': 'registry.osism.tech/osism/ceph-daemon:reef', 'name': '/ceph-rgw-default-testbed-node-4-rgw0', 'state': 'running', 'status': 'Up 20 minutes'})  2025-05-25 04:20:06.335889 | orchestrator | skipping: [testbed-node-4] => (item={'id': 'f0efa22018778a2cb1844fed8896a8ab6e4ba71fff9e042904aa495dbf51848c', 'image': 'registry.osism.tech/osism/ceph-daemon:reef', 'name': '/ceph-mds-testbed-node-4', 'state': 'running', 'status': 'Up 21 minutes'})  2025-05-25 04:20:06.335900 | orchestrator | skipping: [testbed-node-4] => (item={'id': 'f117672f1696fd9604c3d8f14237f5d3f920a1dc3d3b223dd45182320bec1bf7', 'image': 'registry.osism.tech/osism/ceph-daemon:reef', 'name': '/ceph-crash-testbed-node-4', 'state': 'running', 'status': 'Up 22 minutes'})  2025-05-25 04:20:06.335911 | orchestrator | ok: [testbed-node-4] => (item={'id': 'aafc4cd6dce6336894527f2e1c0cf7c6ebe8ca772f62121048749c07c6655a05', 'image': 
'registry.osism.tech/osism/ceph-daemon:reef', 'name': '/ceph-osd-2', 'state': 'running', 'status': 'Up 23 minutes'}) 2025-05-25 04:20:06.335923 | orchestrator | ok: [testbed-node-4] => (item={'id': 'c776ef3e8cb408da1b7518c03ed79a0d75551b74576c41d8b8783b83c92b2ca2', 'image': 'registry.osism.tech/osism/ceph-daemon:reef', 'name': '/ceph-osd-3', 'state': 'running', 'status': 'Up 23 minutes'}) 2025-05-25 04:20:06.335949 | orchestrator | skipping: [testbed-node-4] => (item={'id': '6cc476042578f94a950d10efccdf86619b4bf62b71695d794159b4ac3fafcbbe', 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'name': '/ovn_controller', 'state': 'running', 'status': 'Up 26 minutes'})  2025-05-25 04:20:06.335961 | orchestrator | skipping: [testbed-node-4] => (item={'id': '7406e0d1f5f6122942b0906dfd4074a1bc159effc6798c821873ca08ee54c9a2', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'name': '/openvswitch_vswitchd', 'state': 'running', 'status': 'Up 27 minutes (healthy)'})  2025-05-25 04:20:06.335972 | orchestrator | skipping: [testbed-node-4] => (item={'id': '2ed2de1d8ebf61fb9ba12810fd92fd7dd4fd3fab06cdb493f4eeda6cbdb8ac0b', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'name': '/openvswitch_db', 'state': 'running', 'status': 'Up 27 minutes (healthy)'})  2025-05-25 04:20:06.336006 | orchestrator | skipping: [testbed-node-4] => (item={'id': '33e05522caa9362c3da50fb79cfa69d8b9f1dd2af5e70c9b66f897127254cbce', 'image': 'registry.osism.tech/kolla/cron:2024.2', 'name': '/cron', 'state': 'running', 'status': 'Up 28 minutes'})  2025-05-25 04:20:06.336047 | orchestrator | skipping: [testbed-node-4] => (item={'id': 'a305787a29b016a6d1b91d9acc1c29a774727584ab05aceac2458ec0c0dbb2de', 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'name': '/kolla_toolbox', 'state': 'running', 'status': 'Up 28 minutes'})  2025-05-25 04:20:06.336059 | orchestrator | skipping: [testbed-node-4] => (item={'id': 
'6c082c857672906949ec4d60eef0303c5c0ede6021bf6103c2e97cc16a3c1ec2', 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'name': '/fluentd', 'state': 'running', 'status': 'Up 29 minutes'})  2025-05-25 04:20:06.336071 | orchestrator | skipping: [testbed-node-5] => (item={'id': '74e5fa7ec4fd68c2a9714e244c42c6f56e56b0d5cdf40fa22dfa3ecb67592b7b', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'name': '/nova_compute', 'state': 'running', 'status': 'Up 8 minutes (healthy)'})  2025-05-25 04:20:06.336082 | orchestrator | skipping: [testbed-node-5] => (item={'id': '6cdecf19b310f9976fe9a1d752112f28af0c75bc8f6912d38828e0def76d5430', 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'name': '/nova_libvirt', 'state': 'running', 'status': 'Up 8 minutes (healthy)'})  2025-05-25 04:20:06.336093 | orchestrator | skipping: [testbed-node-5] => (item={'id': '8dafba28ccf933c660264516d783d4992632d3535c7da19e3b3ac3ec18139680', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'name': '/neutron_ovn_metadata_agent', 'state': 'running', 'status': 'Up 9 minutes (healthy)'})  2025-05-25 04:20:06.336104 | orchestrator | skipping: [testbed-node-5] => (item={'id': 'df868a425d2d7854fbc61ec9f22c6d4b5b8af41d66b65d9fd397b9d837fcfcb7', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'name': '/nova_ssh', 'state': 'running', 'status': 'Up 9 minutes (healthy)'})  2025-05-25 04:20:06.336116 | orchestrator | skipping: [testbed-node-5] => (item={'id': '7a0694acea320f32a4124aed5edc3c9df2579c47f6148bcd95888103c271d155', 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'name': '/cinder_backup', 'state': 'running', 'status': 'Up 12 minutes (healthy)'})  2025-05-25 04:20:06.336127 | orchestrator | skipping: [testbed-node-5] => (item={'id': 'c268c89abfc2af6100a7189f64fef6d1a0c32f8c21eb45efd8f8d4db44f35db9', 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'name': '/cinder_volume', 'state': 'running', 'status': 'Up 12 minutes (healthy)'})  2025-05-25 
04:20:06.336138 | orchestrator | skipping: [testbed-node-5] => (item={'id': 'eb9482038fd68345a6e6bd93cc04fae3f017c454e79c530739ae0cc3ea04cd2c', 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'name': '/prometheus_libvirt_exporter', 'state': 'running', 'status': 'Up 13 minutes'})  2025-05-25 04:20:06.336149 | orchestrator | skipping: [testbed-node-5] => (item={'id': '3b9e15b88913b20853e7470d67301717b76271d6037ca306ed43a201507c1425', 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'name': '/prometheus_cadvisor', 'state': 'running', 'status': 'Up 14 minutes'})  2025-05-25 04:20:06.336160 | orchestrator | skipping: [testbed-node-5] => (item={'id': 'b4ab1699018a366708ec7b679b7d4b4a8454f459db032e2d192ddbf1b8b66692', 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'name': '/prometheus_node_exporter', 'state': 'running', 'status': 'Up 14 minutes'})  2025-05-25 04:20:06.336177 | orchestrator | skipping: [testbed-node-5] => (item={'id': 'd3890320c261b9e65025b0e526abf2360eba9c6ae018dc7e1d4d6a292ddd755a', 'image': 'registry.osism.tech/osism/ceph-daemon:reef', 'name': '/ceph-rgw-default-testbed-node-5-rgw0', 'state': 'running', 'status': 'Up 20 minutes'})  2025-05-25 04:20:06.336188 | orchestrator | skipping: [testbed-node-5] => (item={'id': 'a8a3151f465371196dee4c3e11e48c11f47bf35a74c8119fbfad5f1178553285', 'image': 'registry.osism.tech/osism/ceph-daemon:reef', 'name': '/ceph-mds-testbed-node-5', 'state': 'running', 'status': 'Up 21 minutes'})  2025-05-25 04:20:06.336206 | orchestrator | skipping: [testbed-node-5] => (item={'id': 'bf09b715dc8a93ec52ed6a31c9a2720a41d8307d1ab54109bf4137ea368ac0f5', 'image': 'registry.osism.tech/osism/ceph-daemon:reef', 'name': '/ceph-crash-testbed-node-5', 'state': 'running', 'status': 'Up 22 minutes'})  2025-05-25 04:20:06.336224 | orchestrator | ok: [testbed-node-5] => (item={'id': 'ba6675f99a60caa6b5847403a6b9c025fccbb0aca28fbc8ce27a0963d92b08cc', 'image': 
'registry.osism.tech/osism/ceph-daemon:reef', 'name': '/ceph-osd-4', 'state': 'running', 'status': 'Up 23 minutes'}) 2025-05-25 04:20:14.821676 | orchestrator | ok: [testbed-node-5] => (item={'id': '33825aca616b44af8a3a38505dc1fe193533d0c193006096065ae83d51a8af44', 'image': 'registry.osism.tech/osism/ceph-daemon:reef', 'name': '/ceph-osd-1', 'state': 'running', 'status': 'Up 23 minutes'}) 2025-05-25 04:20:14.821844 | orchestrator | skipping: [testbed-node-5] => (item={'id': 'e9bb6475cbe311bf2db8a8959a61d427f7cd808d829505230cf7940670f522cd', 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'name': '/ovn_controller', 'state': 'running', 'status': 'Up 26 minutes'})  2025-05-25 04:20:14.821864 | orchestrator | skipping: [testbed-node-5] => (item={'id': '2ce631d43d694eb7f5c7ccb3267d4df762e7eea11d5c9cdacf4902b19ce7ab7d', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'name': '/openvswitch_vswitchd', 'state': 'running', 'status': 'Up 27 minutes (healthy)'})  2025-05-25 04:20:14.821877 | orchestrator | skipping: [testbed-node-5] => (item={'id': 'd77c38773ef5f5578c170018d48479f2ee54f73fd8a7224f5ffffcf6aa8cdeb6', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'name': '/openvswitch_db', 'state': 'running', 'status': 'Up 27 minutes (healthy)'})  2025-05-25 04:20:14.821888 | orchestrator | skipping: [testbed-node-5] => (item={'id': '116446a5074cffef2282ef75ab4c90f68173de2b58a81362d04136591a9970a0', 'image': 'registry.osism.tech/kolla/cron:2024.2', 'name': '/cron', 'state': 'running', 'status': 'Up 28 minutes'})  2025-05-25 04:20:14.821898 | orchestrator | skipping: [testbed-node-5] => (item={'id': 'fbd070d7e963db6e19f41f94c2e6895aaefe3e22b187f451d0bab142e8f95768', 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'name': '/kolla_toolbox', 'state': 'running', 'status': 'Up 28 minutes'})  2025-05-25 04:20:14.821909 | orchestrator | skipping: [testbed-node-5] => (item={'id': 
'496a67ea3796d9c551ff97c9c98e63c53085b02243d123b4e951c54e4f71b811', 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'name': '/fluentd', 'state': 'running', 'status': 'Up 29 minutes'})  2025-05-25 04:20:14.821919 | orchestrator | 2025-05-25 04:20:14.821930 | orchestrator | TASK [Get count of ceph-osd containers on host] ******************************** 2025-05-25 04:20:14.821941 | orchestrator | Sunday 25 May 2025 04:20:06 +0000 (0:00:00.524) 0:00:05.030 ************ 2025-05-25 04:20:14.821951 | orchestrator | ok: [testbed-node-3] 2025-05-25 04:20:14.822252 | orchestrator | ok: [testbed-node-4] 2025-05-25 04:20:14.822370 | orchestrator | ok: [testbed-node-5] 2025-05-25 04:20:14.822382 | orchestrator | 2025-05-25 04:20:14.822394 | orchestrator | TASK [Set test result to failed when count of containers is wrong] ************* 2025-05-25 04:20:14.822405 | orchestrator | Sunday 25 May 2025 04:20:06 +0000 (0:00:00.316) 0:00:05.347 ************ 2025-05-25 04:20:14.822416 | orchestrator | skipping: [testbed-node-3] 2025-05-25 04:20:14.822428 | orchestrator | skipping: [testbed-node-4] 2025-05-25 04:20:14.822438 | orchestrator | skipping: [testbed-node-5] 2025-05-25 04:20:14.822449 | orchestrator | 2025-05-25 04:20:14.822461 | orchestrator | TASK [Set test result to passed if count matches] ****************************** 2025-05-25 04:20:14.822472 | orchestrator | Sunday 25 May 2025 04:20:07 +0000 (0:00:00.502) 0:00:05.849 ************ 2025-05-25 04:20:14.822538 | orchestrator | ok: [testbed-node-3] 2025-05-25 04:20:14.822549 | orchestrator | ok: [testbed-node-4] 2025-05-25 04:20:14.822559 | orchestrator | ok: [testbed-node-5] 2025-05-25 04:20:14.822586 | orchestrator | 2025-05-25 04:20:14.822596 | orchestrator | TASK [Prepare test data] ******************************************************* 2025-05-25 04:20:14.822632 | orchestrator | Sunday 25 May 2025 04:20:07 +0000 (0:00:00.304) 0:00:06.154 ************ 2025-05-25 04:20:14.822643 | orchestrator | ok: 
[testbed-node-3] 2025-05-25 04:20:14.822652 | orchestrator | ok: [testbed-node-4] 2025-05-25 04:20:14.822661 | orchestrator | ok: [testbed-node-5] 2025-05-25 04:20:14.822670 | orchestrator | 2025-05-25 04:20:14.822680 | orchestrator | TASK [Get list of ceph-osd containers that are not running] ******************** 2025-05-25 04:20:14.822690 | orchestrator | Sunday 25 May 2025 04:20:07 +0000 (0:00:00.296) 0:00:06.451 ************ 2025-05-25 04:20:14.822777 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'ceph-osd-0', 'osd_id': '0', 'state': 'running'})  2025-05-25 04:20:14.822789 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'ceph-osd-5', 'osd_id': '5', 'state': 'running'})  2025-05-25 04:20:14.822799 | orchestrator | skipping: [testbed-node-3] 2025-05-25 04:20:14.822809 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'ceph-osd-2', 'osd_id': '2', 'state': 'running'})  2025-05-25 04:20:14.822818 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'ceph-osd-3', 'osd_id': '3', 'state': 'running'})  2025-05-25 04:20:14.822828 | orchestrator | skipping: [testbed-node-4] 2025-05-25 04:20:14.822837 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'ceph-osd-4', 'osd_id': '4', 'state': 'running'})  2025-05-25 04:20:14.822847 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'ceph-osd-1', 'osd_id': '1', 'state': 'running'})  2025-05-25 04:20:14.822873 | orchestrator | skipping: [testbed-node-5] 2025-05-25 04:20:14.822884 | orchestrator | 2025-05-25 04:20:14.822893 | orchestrator | TASK [Get count of ceph-osd containers that are not running] ******************* 2025-05-25 04:20:14.822903 | orchestrator | Sunday 25 May 2025 04:20:08 +0000 (0:00:00.294) 0:00:06.746 ************ 2025-05-25 04:20:14.822912 | orchestrator | ok: [testbed-node-3] 2025-05-25 04:20:14.822921 | orchestrator | ok: [testbed-node-4] 2025-05-25 04:20:14.822931 | orchestrator | ok: [testbed-node-5] 2025-05-25 04:20:14.822940 | 
orchestrator | 2025-05-25 04:20:14.822968 | orchestrator | TASK [Set test result to failed if an OSD is not running] ********************** 2025-05-25 04:20:14.822978 | orchestrator | Sunday 25 May 2025 04:20:08 +0000 (0:00:00.466) 0:00:07.212 ************ 2025-05-25 04:20:14.822988 | orchestrator | skipping: [testbed-node-3] 2025-05-25 04:20:14.823086 | orchestrator | skipping: [testbed-node-4] 2025-05-25 04:20:14.823109 | orchestrator | skipping: [testbed-node-5] 2025-05-25 04:20:14.823119 | orchestrator | 2025-05-25 04:20:14.823128 | orchestrator | TASK [Set test result to failed if an OSD is not running] ********************** 2025-05-25 04:20:14.823137 | orchestrator | Sunday 25 May 2025 04:20:08 +0000 (0:00:00.294) 0:00:07.507 ************ 2025-05-25 04:20:14.823147 | orchestrator | skipping: [testbed-node-3] 2025-05-25 04:20:14.823156 | orchestrator | skipping: [testbed-node-4] 2025-05-25 04:20:14.823165 | orchestrator | skipping: [testbed-node-5] 2025-05-25 04:20:14.823192 | orchestrator | 2025-05-25 04:20:14.823202 | orchestrator | TASK [Set test result to passed if all containers are running] ***************** 2025-05-25 04:20:14.823211 | orchestrator | Sunday 25 May 2025 04:20:09 +0000 (0:00:00.281) 0:00:07.788 ************ 2025-05-25 04:20:14.823221 | orchestrator | ok: [testbed-node-3] 2025-05-25 04:20:14.823230 | orchestrator | ok: [testbed-node-4] 2025-05-25 04:20:14.823240 | orchestrator | ok: [testbed-node-5] 2025-05-25 04:20:14.823249 | orchestrator | 2025-05-25 04:20:14.823259 | orchestrator | TASK [Aggregate test results step one] ***************************************** 2025-05-25 04:20:14.823340 | orchestrator | Sunday 25 May 2025 04:20:09 +0000 (0:00:00.317) 0:00:08.106 ************ 2025-05-25 04:20:14.823350 | orchestrator | skipping: [testbed-node-3] 2025-05-25 04:20:14.823359 | orchestrator | 2025-05-25 04:20:14.823414 | orchestrator | TASK [Aggregate test results step two] ***************************************** 2025-05-25 
04:20:14.823425 | orchestrator | Sunday 25 May 2025 04:20:10 +0000 (0:00:00.670) 0:00:08.777 ************ 2025-05-25 04:20:14.823434 | orchestrator | skipping: [testbed-node-3] 2025-05-25 04:20:14.823444 | orchestrator | 2025-05-25 04:20:14.823531 | orchestrator | TASK [Aggregate test results step three] *************************************** 2025-05-25 04:20:14.823571 | orchestrator | Sunday 25 May 2025 04:20:10 +0000 (0:00:00.241) 0:00:09.018 ************ 2025-05-25 04:20:14.823581 | orchestrator | skipping: [testbed-node-3] 2025-05-25 04:20:14.823590 | orchestrator | 2025-05-25 04:20:14.823600 | orchestrator | TASK [Flush handlers] ********************************************************** 2025-05-25 04:20:14.823609 | orchestrator | Sunday 25 May 2025 04:20:10 +0000 (0:00:00.249) 0:00:09.268 ************ 2025-05-25 04:20:14.823618 | orchestrator | 2025-05-25 04:20:14.823628 | orchestrator | TASK [Flush handlers] ********************************************************** 2025-05-25 04:20:14.823638 | orchestrator | Sunday 25 May 2025 04:20:10 +0000 (0:00:00.066) 0:00:09.335 ************ 2025-05-25 04:20:14.823647 | orchestrator | 2025-05-25 04:20:14.823657 | orchestrator | TASK [Flush handlers] ********************************************************** 2025-05-25 04:20:14.823666 | orchestrator | Sunday 25 May 2025 04:20:10 +0000 (0:00:00.070) 0:00:09.405 ************ 2025-05-25 04:20:14.823675 | orchestrator | 2025-05-25 04:20:14.823772 | orchestrator | TASK [Print report file information] ******************************************* 2025-05-25 04:20:14.823782 | orchestrator | Sunday 25 May 2025 04:20:10 +0000 (0:00:00.072) 0:00:09.477 ************ 2025-05-25 04:20:14.823791 | orchestrator | skipping: [testbed-node-3] 2025-05-25 04:20:14.823801 | orchestrator | 2025-05-25 04:20:14.823810 | orchestrator | TASK [Fail early due to containers not running] ******************************** 2025-05-25 04:20:14.823819 | orchestrator | Sunday 25 May 2025 04:20:11 +0000 
(0:00:00.238) 0:00:09.716 ************ 2025-05-25 04:20:14.823858 | orchestrator | skipping: [testbed-node-3] 2025-05-25 04:20:14.823868 | orchestrator | 2025-05-25 04:20:14.823878 | orchestrator | TASK [Prepare test data] ******************************************************* 2025-05-25 04:20:14.823887 | orchestrator | Sunday 25 May 2025 04:20:11 +0000 (0:00:00.253) 0:00:09.969 ************ 2025-05-25 04:20:14.823896 | orchestrator | ok: [testbed-node-3] 2025-05-25 04:20:14.823906 | orchestrator | ok: [testbed-node-4] 2025-05-25 04:20:14.823915 | orchestrator | ok: [testbed-node-5] 2025-05-25 04:20:14.823939 | orchestrator | 2025-05-25 04:20:14.823949 | orchestrator | TASK [Set _mon_hostname fact] ************************************************** 2025-05-25 04:20:14.823958 | orchestrator | Sunday 25 May 2025 04:20:11 +0000 (0:00:00.296) 0:00:10.265 ************ 2025-05-25 04:20:14.823968 | orchestrator | ok: [testbed-node-3] 2025-05-25 04:20:14.823977 | orchestrator | 2025-05-25 04:20:14.823987 | orchestrator | TASK [Get ceph osd tree] ******************************************************* 2025-05-25 04:20:14.823996 | orchestrator | Sunday 25 May 2025 04:20:12 +0000 (0:00:00.623) 0:00:10.888 ************ 2025-05-25 04:20:14.824005 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] 2025-05-25 04:20:14.824015 | orchestrator | 2025-05-25 04:20:14.824024 | orchestrator | TASK [Parse osd tree from JSON] ************************************************ 2025-05-25 04:20:14.824033 | orchestrator | Sunday 25 May 2025 04:20:13 +0000 (0:00:01.592) 0:00:12.481 ************ 2025-05-25 04:20:14.824043 | orchestrator | ok: [testbed-node-3] 2025-05-25 04:20:14.824052 | orchestrator | 2025-05-25 04:20:14.824061 | orchestrator | TASK [Get OSDs that are not up or in] ****************************************** 2025-05-25 04:20:14.824071 | orchestrator | Sunday 25 May 2025 04:20:13 +0000 (0:00:00.119) 0:00:12.601 ************ 2025-05-25 04:20:14.824172 | 
orchestrator | ok: [testbed-node-3] 2025-05-25 04:20:14.824182 | orchestrator | 2025-05-25 04:20:14.824192 | orchestrator | TASK [Fail test if OSDs are not up or in] ************************************** 2025-05-25 04:20:14.824201 | orchestrator | Sunday 25 May 2025 04:20:14 +0000 (0:00:00.316) 0:00:12.917 ************ 2025-05-25 04:20:14.824210 | orchestrator | skipping: [testbed-node-3] 2025-05-25 04:20:14.824219 | orchestrator | 2025-05-25 04:20:14.824229 | orchestrator | TASK [Pass test if OSDs are all up and in] ************************************* 2025-05-25 04:20:14.824326 | orchestrator | Sunday 25 May 2025 04:20:14 +0000 (0:00:00.106) 0:00:13.023 ************ 2025-05-25 04:20:14.824338 | orchestrator | ok: [testbed-node-3] 2025-05-25 04:20:14.824347 | orchestrator | 2025-05-25 04:20:14.824357 | orchestrator | TASK [Prepare test data] ******************************************************* 2025-05-25 04:20:14.824366 | orchestrator | Sunday 25 May 2025 04:20:14 +0000 (0:00:00.135) 0:00:13.159 ************ 2025-05-25 04:20:14.824383 | orchestrator | ok: [testbed-node-3] 2025-05-25 04:20:14.824392 | orchestrator | ok: [testbed-node-4] 2025-05-25 04:20:14.824401 | orchestrator | ok: [testbed-node-5] 2025-05-25 04:20:14.824411 | orchestrator | 2025-05-25 04:20:14.824420 | orchestrator | TASK [List ceph LVM volumes and collect data] ********************************** 2025-05-25 04:20:14.824439 | orchestrator | Sunday 25 May 2025 04:20:14 +0000 (0:00:00.282) 0:00:13.442 ************ 2025-05-25 04:20:26.427727 | orchestrator | changed: [testbed-node-3] 2025-05-25 04:20:26.427849 | orchestrator | changed: [testbed-node-4] 2025-05-25 04:20:26.427865 | orchestrator | changed: [testbed-node-5] 2025-05-25 04:20:26.427877 | orchestrator | 2025-05-25 04:20:26.427889 | orchestrator | TASK [Parse LVM data as JSON] ************************************************** 2025-05-25 04:20:26.427902 | orchestrator | Sunday 25 May 2025 04:20:17 +0000 (0:00:02.443) 0:00:15.885 
************ 2025-05-25 04:20:26.427929 | orchestrator | ok: [testbed-node-3] 2025-05-25 04:20:26.427942 | orchestrator | ok: [testbed-node-4] 2025-05-25 04:20:26.427953 | orchestrator | ok: [testbed-node-5] 2025-05-25 04:20:26.427963 | orchestrator | 2025-05-25 04:20:26.427988 | orchestrator | TASK [Get unencrypted and encrypted OSDs] ************************************** 2025-05-25 04:20:26.427999 | orchestrator | Sunday 25 May 2025 04:20:17 +0000 (0:00:00.297) 0:00:16.182 ************ 2025-05-25 04:20:26.428010 | orchestrator | ok: [testbed-node-3] 2025-05-25 04:20:26.428022 | orchestrator | ok: [testbed-node-4] 2025-05-25 04:20:26.428033 | orchestrator | ok: [testbed-node-5] 2025-05-25 04:20:26.428044 | orchestrator | 2025-05-25 04:20:26.428055 | orchestrator | TASK [Fail if count of encrypted OSDs does not match] ************************** 2025-05-25 04:20:26.428066 | orchestrator | Sunday 25 May 2025 04:20:18 +0000 (0:00:00.485) 0:00:16.668 ************ 2025-05-25 04:20:26.428077 | orchestrator | skipping: [testbed-node-3] 2025-05-25 04:20:26.428088 | orchestrator | skipping: [testbed-node-4] 2025-05-25 04:20:26.428099 | orchestrator | skipping: [testbed-node-5] 2025-05-25 04:20:26.428110 | orchestrator | 2025-05-25 04:20:26.428121 | orchestrator | TASK [Pass if count of encrypted OSDs equals count of OSDs] ******************** 2025-05-25 04:20:26.428132 | orchestrator | Sunday 25 May 2025 04:20:18 +0000 (0:00:00.300) 0:00:16.968 ************ 2025-05-25 04:20:26.428143 | orchestrator | ok: [testbed-node-3] 2025-05-25 04:20:26.428154 | orchestrator | ok: [testbed-node-4] 2025-05-25 04:20:26.428164 | orchestrator | ok: [testbed-node-5] 2025-05-25 04:20:26.428175 | orchestrator | 2025-05-25 04:20:26.428186 | orchestrator | TASK [Fail if count of unencrypted OSDs does not match] ************************ 2025-05-25 04:20:26.428197 | orchestrator | Sunday 25 May 2025 04:20:18 +0000 (0:00:00.488) 0:00:17.456 ************ 2025-05-25 04:20:26.428208 | orchestrator | 
skipping: [testbed-node-3] 2025-05-25 04:20:26.428218 | orchestrator | skipping: [testbed-node-4] 2025-05-25 04:20:26.428229 | orchestrator | skipping: [testbed-node-5] 2025-05-25 04:20:26.428240 | orchestrator | 2025-05-25 04:20:26.428251 | orchestrator | TASK [Pass if count of unencrypted OSDs equals count of OSDs] ****************** 2025-05-25 04:20:26.428294 | orchestrator | Sunday 25 May 2025 04:20:19 +0000 (0:00:00.270) 0:00:17.727 ************ 2025-05-25 04:20:26.428313 | orchestrator | skipping: [testbed-node-3] 2025-05-25 04:20:26.428332 | orchestrator | skipping: [testbed-node-4] 2025-05-25 04:20:26.428352 | orchestrator | skipping: [testbed-node-5] 2025-05-25 04:20:26.428370 | orchestrator | 2025-05-25 04:20:26.428388 | orchestrator | TASK [Prepare test data] ******************************************************* 2025-05-25 04:20:26.428407 | orchestrator | Sunday 25 May 2025 04:20:19 +0000 (0:00:00.282) 0:00:18.009 ************ 2025-05-25 04:20:26.428419 | orchestrator | ok: [testbed-node-3] 2025-05-25 04:20:26.428429 | orchestrator | ok: [testbed-node-4] 2025-05-25 04:20:26.428440 | orchestrator | ok: [testbed-node-5] 2025-05-25 04:20:26.428451 | orchestrator | 2025-05-25 04:20:26.428461 | orchestrator | TASK [Get CRUSH node data of each OSD host and root node childs] *************** 2025-05-25 04:20:26.428472 | orchestrator | Sunday 25 May 2025 04:20:19 +0000 (0:00:00.476) 0:00:18.485 ************ 2025-05-25 04:20:26.428483 | orchestrator | ok: [testbed-node-3] 2025-05-25 04:20:26.428516 | orchestrator | ok: [testbed-node-4] 2025-05-25 04:20:26.428526 | orchestrator | ok: [testbed-node-5] 2025-05-25 04:20:26.428537 | orchestrator | 2025-05-25 04:20:26.428548 | orchestrator | TASK [Calculate sub test expression results] *********************************** 2025-05-25 04:20:26.428559 | orchestrator | Sunday 25 May 2025 04:20:20 +0000 (0:00:00.689) 0:00:19.175 ************ 2025-05-25 04:20:26.428569 | orchestrator | ok: [testbed-node-3] 2025-05-25 
04:20:26.428580 | orchestrator | ok: [testbed-node-4] 2025-05-25 04:20:26.428591 | orchestrator | ok: [testbed-node-5] 2025-05-25 04:20:26.428601 | orchestrator | 2025-05-25 04:20:26.428612 | orchestrator | TASK [Fail test if any sub test failed] **************************************** 2025-05-25 04:20:26.428622 | orchestrator | Sunday 25 May 2025 04:20:20 +0000 (0:00:00.287) 0:00:19.462 ************ 2025-05-25 04:20:26.428633 | orchestrator | skipping: [testbed-node-3] 2025-05-25 04:20:26.428644 | orchestrator | skipping: [testbed-node-4] 2025-05-25 04:20:26.428654 | orchestrator | skipping: [testbed-node-5] 2025-05-25 04:20:26.428665 | orchestrator | 2025-05-25 04:20:26.428676 | orchestrator | TASK [Pass test if no sub test failed] ***************************************** 2025-05-25 04:20:26.428687 | orchestrator | Sunday 25 May 2025 04:20:21 +0000 (0:00:00.306) 0:00:19.768 ************ 2025-05-25 04:20:26.428697 | orchestrator | ok: [testbed-node-3] 2025-05-25 04:20:26.428708 | orchestrator | ok: [testbed-node-4] 2025-05-25 04:20:26.428719 | orchestrator | ok: [testbed-node-5] 2025-05-25 04:20:26.428729 | orchestrator | 2025-05-25 04:20:26.428740 | orchestrator | TASK [Set validation result to passed if no test failed] *********************** 2025-05-25 04:20:26.428751 | orchestrator | Sunday 25 May 2025 04:20:21 +0000 (0:00:00.464) 0:00:20.232 ************ 2025-05-25 04:20:26.428761 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] 2025-05-25 04:20:26.428786 | orchestrator | 2025-05-25 04:20:26.428797 | orchestrator | TASK [Set validation result to failed if a test failed] ************************ 2025-05-25 04:20:26.428808 | orchestrator | Sunday 25 May 2025 04:20:21 +0000 (0:00:00.257) 0:00:20.490 ************ 2025-05-25 04:20:26.428818 | orchestrator | skipping: [testbed-node-3] 2025-05-25 04:20:26.428829 | orchestrator | 2025-05-25 04:20:26.428840 | orchestrator | TASK [Aggregate test results step one] 
***************************************** 2025-05-25 04:20:26.428851 | orchestrator | Sunday 25 May 2025 04:20:22 +0000 (0:00:00.231) 0:00:20.722 ************ 2025-05-25 04:20:26.428861 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] 2025-05-25 04:20:26.428872 | orchestrator | 2025-05-25 04:20:26.428883 | orchestrator | TASK [Aggregate test results step two] ***************************************** 2025-05-25 04:20:26.428893 | orchestrator | Sunday 25 May 2025 04:20:23 +0000 (0:00:01.523) 0:00:22.245 ************ 2025-05-25 04:20:26.428904 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] 2025-05-25 04:20:26.428915 | orchestrator | 2025-05-25 04:20:26.428925 | orchestrator | TASK [Aggregate test results step three] *************************************** 2025-05-25 04:20:26.428936 | orchestrator | Sunday 25 May 2025 04:20:23 +0000 (0:00:00.269) 0:00:22.515 ************ 2025-05-25 04:20:26.428965 | orchestrator | changed: [testbed-node-3 -> testbed-manager(192.168.16.5)] 2025-05-25 04:20:26.428977 | orchestrator | 2025-05-25 04:20:26.428988 | orchestrator | TASK [Flush handlers] ********************************************************** 2025-05-25 04:20:26.428998 | orchestrator | Sunday 25 May 2025 04:20:24 +0000 (0:00:00.270) 0:00:22.785 ************ 2025-05-25 04:20:26.429009 | orchestrator | 2025-05-25 04:20:26.429020 | orchestrator | TASK [Flush handlers] ********************************************************** 2025-05-25 04:20:26.429031 | orchestrator | Sunday 25 May 2025 04:20:24 +0000 (0:00:00.065) 0:00:22.851 ************ 2025-05-25 04:20:26.429041 | orchestrator | 2025-05-25 04:20:26.429052 | orchestrator | TASK [Flush handlers] ********************************************************** 2025-05-25 04:20:26.429063 | orchestrator | Sunday 25 May 2025 04:20:24 +0000 (0:00:00.070) 0:00:22.922 ************ 2025-05-25 04:20:26.429073 | orchestrator | 2025-05-25 04:20:26.429084 | orchestrator | RUNNING HANDLER [Write 
report file] ******************************************** 2025-05-25 04:20:26.429095 | orchestrator | Sunday 25 May 2025 04:20:24 +0000 (0:00:00.080) 0:00:23.002 ************ 2025-05-25 04:20:26.429113 | orchestrator | changed: [testbed-node-3 -> testbed-manager(192.168.16.5)] 2025-05-25 04:20:26.429124 | orchestrator | 2025-05-25 04:20:26.429215 | orchestrator | TASK [Print report file information] ******************************************* 2025-05-25 04:20:26.429227 | orchestrator | Sunday 25 May 2025 04:20:25 +0000 (0:00:01.214) 0:00:24.217 ************ 2025-05-25 04:20:26.429238 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] => { 2025-05-25 04:20:26.429249 | orchestrator |  "msg": [ 2025-05-25 04:20:26.429285 | orchestrator |  "Validator run completed.", 2025-05-25 04:20:26.429298 | orchestrator |  "You can find the report file here:", 2025-05-25 04:20:26.429308 | orchestrator |  "/opt/reports/validator/ceph-osds-validator-2025-05-25T04:20:02+00:00-report.json", 2025-05-25 04:20:26.429320 | orchestrator |  "on the following host:", 2025-05-25 04:20:26.429331 | orchestrator |  "testbed-manager" 2025-05-25 04:20:26.429342 | orchestrator |  ] 2025-05-25 04:20:26.429353 | orchestrator | } 2025-05-25 04:20:26.429364 | orchestrator | 2025-05-25 04:20:26.429375 | orchestrator | PLAY RECAP ********************************************************************* 2025-05-25 04:20:26.429388 | orchestrator | testbed-node-3 : ok=35  changed=4  unreachable=0 failed=0 skipped=17  rescued=0 ignored=0 2025-05-25 04:20:26.429400 | orchestrator | testbed-node-4 : ok=18  changed=1  unreachable=0 failed=0 skipped=9  rescued=0 ignored=0 2025-05-25 04:20:26.429411 | orchestrator | testbed-node-5 : ok=18  changed=1  unreachable=0 failed=0 skipped=9  rescued=0 ignored=0 2025-05-25 04:20:26.429421 | orchestrator | 2025-05-25 04:20:26.429432 | orchestrator | 2025-05-25 04:20:26.429443 | orchestrator | TASKS RECAP 
******************************************************************** 2025-05-25 04:20:26.429454 | orchestrator | Sunday 25 May 2025 04:20:26 +0000 (0:00:00.559) 0:00:24.776 ************ 2025-05-25 04:20:26.429464 | orchestrator | =============================================================================== 2025-05-25 04:20:26.429475 | orchestrator | List ceph LVM volumes and collect data ---------------------------------- 2.44s 2025-05-25 04:20:26.429488 | orchestrator | Get ceph osd tree ------------------------------------------------------- 1.59s 2025-05-25 04:20:26.429506 | orchestrator | Aggregate test results step one ----------------------------------------- 1.52s 2025-05-25 04:20:26.429519 | orchestrator | Write report file ------------------------------------------------------- 1.21s 2025-05-25 04:20:26.429530 | orchestrator | Create report output directory ------------------------------------------ 0.90s 2025-05-25 04:20:26.429540 | orchestrator | Get CRUSH node data of each OSD host and root node childs --------------- 0.69s 2025-05-25 04:20:26.429551 | orchestrator | Aggregate test results step one ----------------------------------------- 0.67s 2025-05-25 04:20:26.429562 | orchestrator | Get timestamp for report file ------------------------------------------- 0.66s 2025-05-25 04:20:26.429576 | orchestrator | Set _mon_hostname fact -------------------------------------------------- 0.62s 2025-05-25 04:20:26.429592 | orchestrator | Print report file information ------------------------------------------- 0.56s 2025-05-25 04:20:26.429604 | orchestrator | Calculate total number of OSDs in cluster ------------------------------- 0.56s 2025-05-25 04:20:26.429614 | orchestrator | Prepare test data ------------------------------------------------------- 0.53s 2025-05-25 04:20:26.429625 | orchestrator | Get list of ceph-osd containers on host --------------------------------- 0.52s 2025-05-25 04:20:26.429642 | orchestrator | Set test result to failed when 
count of containers is wrong ------------- 0.50s 2025-05-25 04:20:26.429653 | orchestrator | Pass if count of encrypted OSDs equals count of OSDs -------------------- 0.49s 2025-05-25 04:20:26.429664 | orchestrator | Get unencrypted and encrypted OSDs -------------------------------------- 0.49s 2025-05-25 04:20:26.429682 | orchestrator | Prepare test data ------------------------------------------------------- 0.48s 2025-05-25 04:20:26.429702 | orchestrator | Get count of ceph-osd containers that are not running ------------------- 0.47s 2025-05-25 04:20:26.429728 | orchestrator | Pass test if no sub test failed ----------------------------------------- 0.46s 2025-05-25 04:20:26.429739 | orchestrator | Get extra vars for Ceph configuration ----------------------------------- 0.42s 2025-05-25 04:20:26.673621 | orchestrator | + sh -c /opt/configuration/scripts/check/200-infrastructure.sh 2025-05-25 04:20:26.681975 | orchestrator | + set -e 2025-05-25 04:20:26.683339 | orchestrator | + source /opt/manager-vars.sh 2025-05-25 04:20:26.683376 | orchestrator | ++ export NUMBER_OF_NODES=6 2025-05-25 04:20:26.683390 | orchestrator | ++ NUMBER_OF_NODES=6 2025-05-25 04:20:26.683403 | orchestrator | ++ export CEPH_VERSION=reef 2025-05-25 04:20:26.683416 | orchestrator | ++ CEPH_VERSION=reef 2025-05-25 04:20:26.683429 | orchestrator | ++ export CONFIGURATION_VERSION=main 2025-05-25 04:20:26.683442 | orchestrator | ++ CONFIGURATION_VERSION=main 2025-05-25 04:20:26.683454 | orchestrator | ++ export MANAGER_VERSION=latest 2025-05-25 04:20:26.683467 | orchestrator | ++ MANAGER_VERSION=latest 2025-05-25 04:20:26.683479 | orchestrator | ++ export OPENSTACK_VERSION=2024.2 2025-05-25 04:20:26.683492 | orchestrator | ++ OPENSTACK_VERSION=2024.2 2025-05-25 04:20:26.683506 | orchestrator | ++ export ARA=false 2025-05-25 04:20:26.683518 | orchestrator | ++ ARA=false 2025-05-25 04:20:26.683531 | orchestrator | ++ export TEMPEST=true 2025-05-25 04:20:26.683542 | orchestrator | ++ 
TEMPEST=true 2025-05-25 04:20:26.683549 | orchestrator | ++ export IS_ZUUL=true 2025-05-25 04:20:26.683556 | orchestrator | ++ IS_ZUUL=true 2025-05-25 04:20:26.683563 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.192.153 2025-05-25 04:20:26.683571 | orchestrator | ++ MANAGER_PUBLIC_IP_ADDRESS=81.163.192.153 2025-05-25 04:20:26.683578 | orchestrator | ++ export EXTERNAL_API=false 2025-05-25 04:20:26.683586 | orchestrator | ++ EXTERNAL_API=false 2025-05-25 04:20:26.683593 | orchestrator | ++ export IMAGE_USER=ubuntu 2025-05-25 04:20:26.683600 | orchestrator | ++ IMAGE_USER=ubuntu 2025-05-25 04:20:26.683607 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu 2025-05-25 04:20:26.683614 | orchestrator | ++ IMAGE_NODE_USER=ubuntu 2025-05-25 04:20:26.683621 | orchestrator | ++ export CEPH_STACK=ceph-ansible 2025-05-25 04:20:26.683628 | orchestrator | ++ CEPH_STACK=ceph-ansible 2025-05-25 04:20:26.683635 | orchestrator | + [[ -e /etc/redhat-release ]] 2025-05-25 04:20:26.683642 | orchestrator | + source /etc/os-release 2025-05-25 04:20:26.683649 | orchestrator | ++ PRETTY_NAME='Ubuntu 24.04.2 LTS' 2025-05-25 04:20:26.683656 | orchestrator | ++ NAME=Ubuntu 2025-05-25 04:20:26.683663 | orchestrator | ++ VERSION_ID=24.04 2025-05-25 04:20:26.683670 | orchestrator | ++ VERSION='24.04.2 LTS (Noble Numbat)' 2025-05-25 04:20:26.683677 | orchestrator | ++ VERSION_CODENAME=noble 2025-05-25 04:20:26.683684 | orchestrator | ++ ID=ubuntu 2025-05-25 04:20:26.683692 | orchestrator | ++ ID_LIKE=debian 2025-05-25 04:20:26.683699 | orchestrator | ++ HOME_URL=https://www.ubuntu.com/ 2025-05-25 04:20:26.683706 | orchestrator | ++ SUPPORT_URL=https://help.ubuntu.com/ 2025-05-25 04:20:26.683713 | orchestrator | ++ BUG_REPORT_URL=https://bugs.launchpad.net/ubuntu/ 2025-05-25 04:20:26.683721 | orchestrator | ++ PRIVACY_POLICY_URL=https://www.ubuntu.com/legal/terms-and-policies/privacy-policy 2025-05-25 04:20:26.683729 | orchestrator | ++ UBUNTU_CODENAME=noble 2025-05-25 
04:20:26.683736 | orchestrator | ++ LOGO=ubuntu-logo 2025-05-25 04:20:26.683743 | orchestrator | + [[ ubuntu == \u\b\u\n\t\u ]] 2025-05-25 04:20:26.683751 | orchestrator | + packages='libmonitoring-plugin-perl libwww-perl libjson-perl monitoring-plugins-basic mysql-client' 2025-05-25 04:20:26.683759 | orchestrator | + dpkg -s libmonitoring-plugin-perl libwww-perl libjson-perl monitoring-plugins-basic mysql-client 2025-05-25 04:20:26.696741 | orchestrator | + sudo apt-get install -y libmonitoring-plugin-perl libwww-perl libjson-perl monitoring-plugins-basic mysql-client 2025-05-25 04:20:46.419709 | orchestrator | 2025-05-25 04:20:46.419822 | orchestrator | # Status of Elasticsearch 2025-05-25 04:20:46.419840 | orchestrator | 2025-05-25 04:20:46.419852 | orchestrator | + pushd /opt/configuration/contrib 2025-05-25 04:20:46.419865 | orchestrator | + echo 2025-05-25 04:20:46.419876 | orchestrator | + echo '# Status of Elasticsearch' 2025-05-25 04:20:46.419887 | orchestrator | + echo 2025-05-25 04:20:46.419899 | orchestrator | + bash nagios-plugins/check_elasticsearch -H api-int.testbed.osism.xyz -s 2025-05-25 04:20:46.602912 | orchestrator | OK - elasticsearch (kolla_logging) is running. 
status: green; timed_out: false; number_of_nodes: 3; number_of_data_nodes: 3; active_primary_shards: 9; active_shards: 22; relocating_shards: 0; initializing_shards: 0; delayed_unassigned_shards: 0; unassigned_shards: 0 | 'active_primary'=9 'active'=22 'relocating'=0 'init'=0 'delay_unass'=0 'unass'=0 2025-05-25 04:20:46.603196 | orchestrator | 2025-05-25 04:20:46.603222 | orchestrator | # Status of MariaDB 2025-05-25 04:20:46.603235 | orchestrator | 2025-05-25 04:20:46.603247 | orchestrator | + echo 2025-05-25 04:20:46.603313 | orchestrator | + echo '# Status of MariaDB' 2025-05-25 04:20:46.603327 | orchestrator | + echo 2025-05-25 04:20:46.603338 | orchestrator | + MARIADB_USER=root_shard_0 2025-05-25 04:20:46.603350 | orchestrator | + bash nagios-plugins/check_galera_cluster -u root_shard_0 -p password -H api-int.testbed.osism.xyz -c 1 2025-05-25 04:20:46.663538 | orchestrator | Reading package lists... 2025-05-25 04:20:46.975754 | orchestrator | Building dependency tree... 2025-05-25 04:20:46.975889 | orchestrator | Reading state information... 2025-05-25 04:20:47.341575 | orchestrator | bc is already the newest version (1.07.1-3ubuntu4). 2025-05-25 04:20:47.341688 | orchestrator | bc set to manually installed. 2025-05-25 04:20:47.341704 | orchestrator | 0 upgraded, 0 newly installed, 0 to remove and 0 not upgraded. 
2025-05-25 04:20:48.017349 | orchestrator | OK: number of NODES = 3 (wsrep_cluster_size) 2025-05-25 04:20:48.017528 | orchestrator | 2025-05-25 04:20:48.017549 | orchestrator | # Status of Prometheus 2025-05-25 04:20:48.017562 | orchestrator | 2025-05-25 04:20:48.017573 | orchestrator | + echo 2025-05-25 04:20:48.017584 | orchestrator | + echo '# Status of Prometheus' 2025-05-25 04:20:48.017595 | orchestrator | + echo 2025-05-25 04:20:48.017606 | orchestrator | + curl -s https://api-int.testbed.osism.xyz:9091/-/healthy 2025-05-25 04:20:48.081836 | orchestrator | Unauthorized 2025-05-25 04:20:48.085041 | orchestrator | + curl -s https://api-int.testbed.osism.xyz:9091/-/ready 2025-05-25 04:20:48.144541 | orchestrator | Unauthorized 2025-05-25 04:20:48.148492 | orchestrator | 2025-05-25 04:20:48.148553 | orchestrator | # Status of RabbitMQ 2025-05-25 04:20:48.148566 | orchestrator | 2025-05-25 04:20:48.148578 | orchestrator | + echo 2025-05-25 04:20:48.148589 | orchestrator | + echo '# Status of RabbitMQ' 2025-05-25 04:20:48.148600 | orchestrator | + echo 2025-05-25 04:20:48.148613 | orchestrator | + perl nagios-plugins/check_rabbitmq_cluster --ssl 1 -H api-int.testbed.osism.xyz -u openstack -p password 2025-05-25 04:20:48.631469 | orchestrator | RABBITMQ_CLUSTER OK - nb_running_node OK (3) nb_running_disc_node OK (3) nb_running_ram_node OK (0) 2025-05-25 04:20:48.640191 | orchestrator | 2025-05-25 04:20:48.640298 | orchestrator | # Status of Redis 2025-05-25 04:20:48.640315 | orchestrator | 2025-05-25 04:20:48.640326 | orchestrator | + echo 2025-05-25 04:20:48.640337 | orchestrator | + echo '# Status of Redis' 2025-05-25 04:20:48.640349 | orchestrator | + echo 2025-05-25 04:20:48.640361 | orchestrator | + /usr/lib/nagios/plugins/check_tcp -H 192.168.16.10 -p 6379 -A -E -s 'AUTH QHNA1SZRlOKzLADhUd5ZDgpHfQe6dNfr3bwEdY24\r\nPING\r\nINFO replication\r\nQUIT\r\n' -e PONG -e role:master -e slave0:ip=192.168.16.1 -e,port=6379 -j 2025-05-25 04:20:48.650615 | orchestrator | 
TCP OK - 0.005 second response time on 192.168.16.10 port 6379|time=0.005477s;;;0.000000;10.000000 2025-05-25 04:20:48.651132 | orchestrator | 2025-05-25 04:20:48.651162 | orchestrator | + popd 2025-05-25 04:20:48.651173 | orchestrator | + echo 2025-05-25 04:20:48.651184 | orchestrator | # Create backup of MariaDB database 2025-05-25 04:20:48.651196 | orchestrator | 2025-05-25 04:20:48.651208 | orchestrator | + echo '# Create backup of MariaDB database' 2025-05-25 04:20:48.651219 | orchestrator | + echo 2025-05-25 04:20:48.651230 | orchestrator | + osism apply mariadb_backup -e mariadb_backup_type=full 2025-05-25 04:20:50.385509 | orchestrator | 2025-05-25 04:20:50 | INFO  | Task 3a88a2d0-f4be-4d0a-b85c-54c6450d257a (mariadb_backup) was prepared for execution. 2025-05-25 04:20:50.385633 | orchestrator | 2025-05-25 04:20:50 | INFO  | It takes a moment until task 3a88a2d0-f4be-4d0a-b85c-54c6450d257a (mariadb_backup) has been started and output is visible here. 2025-05-25 04:20:54.094105 | orchestrator | 2025-05-25 04:20:54.094388 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-05-25 04:20:54.095129 | orchestrator | 2025-05-25 04:20:54.096118 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-05-25 04:20:54.097525 | orchestrator | Sunday 25 May 2025 04:20:54 +0000 (0:00:00.137) 0:00:00.137 ************ 2025-05-25 04:20:54.236739 | orchestrator | ok: [testbed-node-0] 2025-05-25 04:20:54.330724 | orchestrator | ok: [testbed-node-1] 2025-05-25 04:20:54.333789 | orchestrator | ok: [testbed-node-2] 2025-05-25 04:20:54.333841 | orchestrator | 2025-05-25 04:20:54.333855 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-05-25 04:20:54.335481 | orchestrator | Sunday 25 May 2025 04:20:54 +0000 (0:00:00.239) 0:00:00.377 ************ 2025-05-25 04:20:54.762309 | orchestrator | ok: [testbed-node-0] => 
(item=enable_mariadb_True) 2025-05-25 04:20:54.763304 | orchestrator | ok: [testbed-node-1] => (item=enable_mariadb_True) 2025-05-25 04:20:54.763627 | orchestrator | ok: [testbed-node-2] => (item=enable_mariadb_True) 2025-05-25 04:20:54.764743 | orchestrator | 2025-05-25 04:20:54.764990 | orchestrator | PLAY [Apply role mariadb] ****************************************************** 2025-05-25 04:20:54.765616 | orchestrator | 2025-05-25 04:20:54.766439 | orchestrator | TASK [mariadb : Group MariaDB hosts based on shards] *************************** 2025-05-25 04:20:54.766879 | orchestrator | Sunday 25 May 2025 04:20:54 +0000 (0:00:00.431) 0:00:00.808 ************ 2025-05-25 04:20:55.134211 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2025-05-25 04:20:55.134807 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-1) 2025-05-25 04:20:55.135668 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-2) 2025-05-25 04:20:55.136116 | orchestrator | 2025-05-25 04:20:55.138456 | orchestrator | TASK [mariadb : include_tasks] ************************************************* 2025-05-25 04:20:55.138529 | orchestrator | Sunday 25 May 2025 04:20:55 +0000 (0:00:00.370) 0:00:01.179 ************ 2025-05-25 04:20:55.585383 | orchestrator | included: /ansible/roles/mariadb/tasks/backup.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-25 04:20:55.585792 | orchestrator | 2025-05-25 04:20:55.587196 | orchestrator | TASK [mariadb : Get MariaDB container facts] *********************************** 2025-05-25 04:20:55.587860 | orchestrator | Sunday 25 May 2025 04:20:55 +0000 (0:00:00.451) 0:00:01.631 ************ 2025-05-25 04:20:58.290285 | orchestrator | ok: [testbed-node-1] 2025-05-25 04:20:58.292730 | orchestrator | ok: [testbed-node-0] 2025-05-25 04:20:58.296095 | orchestrator | ok: [testbed-node-2] 2025-05-25 04:20:58.297234 | orchestrator | 2025-05-25 04:20:58.298990 | orchestrator | TASK [mariadb : Taking full database backup via 
Mariabackup] ******************* 2025-05-25 04:20:58.299688 | orchestrator | Sunday 25 May 2025 04:20:58 +0000 (0:00:02.701) 0:00:04.333 ************ 2025-05-25 04:22:25.525283 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: mariadb_restart 2025-05-25 04:22:25.525400 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: mariadb_start 2025-05-25 04:22:25.527814 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: 2025-05-25 04:22:25.527908 | orchestrator | mariadb_bootstrap_restart 2025-05-25 04:22:25.599132 | orchestrator | skipping: [testbed-node-1] 2025-05-25 04:22:25.600048 | orchestrator | skipping: [testbed-node-2] 2025-05-25 04:22:25.600974 | orchestrator | changed: [testbed-node-0] 2025-05-25 04:22:25.602719 | orchestrator | 2025-05-25 04:22:25.603728 | orchestrator | PLAY [Restart mariadb services] ************************************************ 2025-05-25 04:22:25.604267 | orchestrator | skipping: no hosts matched 2025-05-25 04:22:25.605146 | orchestrator | 2025-05-25 04:22:25.606874 | orchestrator | PLAY [Start mariadb services] ************************************************** 2025-05-25 04:22:25.606902 | orchestrator | skipping: no hosts matched 2025-05-25 04:22:25.607518 | orchestrator | 2025-05-25 04:22:25.608344 | orchestrator | PLAY [Restart bootstrap mariadb service] *************************************** 2025-05-25 04:22:25.609392 | orchestrator | skipping: no hosts matched 2025-05-25 04:22:25.610178 | orchestrator | 2025-05-25 04:22:25.611451 | orchestrator | PLAY [Apply mariadb post-configuration] **************************************** 2025-05-25 04:22:25.612261 | orchestrator | 2025-05-25 04:22:25.612710 | orchestrator | TASK [Include mariadb post-deploy.yml] ***************************************** 2025-05-25 04:22:25.613896 | orchestrator | Sunday 25 May 2025 04:22:25 +0000 (0:01:27.308) 0:01:31.641 ************ 2025-05-25 04:22:25.772697 | orchestrator | 
skipping: [testbed-node-0] 2025-05-25 04:22:25.900719 | orchestrator | skipping: [testbed-node-1] 2025-05-25 04:22:25.900827 | orchestrator | skipping: [testbed-node-2] 2025-05-25 04:22:25.902483 | orchestrator | 2025-05-25 04:22:25.903405 | orchestrator | TASK [Include mariadb post-upgrade.yml] **************************************** 2025-05-25 04:22:25.904779 | orchestrator | Sunday 25 May 2025 04:22:25 +0000 (0:00:00.304) 0:01:31.945 ************ 2025-05-25 04:22:26.264427 | orchestrator | skipping: [testbed-node-0] 2025-05-25 04:22:26.306194 | orchestrator | skipping: [testbed-node-1] 2025-05-25 04:22:26.307314 | orchestrator | skipping: [testbed-node-2] 2025-05-25 04:22:26.307893 | orchestrator | 2025-05-25 04:22:26.309817 | orchestrator | PLAY RECAP ********************************************************************* 2025-05-25 04:22:26.310962 | orchestrator | 2025-05-25 04:22:26 | INFO  | Play has been completed. There may now be a delay until all logs have been written. 2025-05-25 04:22:26.311017 | orchestrator | 2025-05-25 04:22:26 | INFO  | Please wait and do not abort execution. 
2025-05-25 04:22:26.311488 | orchestrator | testbed-node-0 : ok=6  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-05-25 04:22:26.312285 | orchestrator | testbed-node-1 : ok=4  changed=0 unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2025-05-25 04:22:26.312849 | orchestrator | testbed-node-2 : ok=4  changed=0 unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2025-05-25 04:22:26.313576 | orchestrator | 2025-05-25 04:22:26.313841 | orchestrator | 2025-05-25 04:22:26.314922 | orchestrator | TASKS RECAP ******************************************************************** 2025-05-25 04:22:26.315848 | orchestrator | Sunday 25 May 2025 04:22:26 +0000 (0:00:00.406) 0:01:32.352 ************ 2025-05-25 04:22:26.316620 | orchestrator | =============================================================================== 2025-05-25 04:22:26.317566 | orchestrator | mariadb : Taking full database backup via Mariabackup ------------------ 87.31s 2025-05-25 04:22:26.318151 | orchestrator | mariadb : Get MariaDB container facts ----------------------------------- 2.70s 2025-05-25 04:22:26.318716 | orchestrator | mariadb : include_tasks ------------------------------------------------- 0.45s 2025-05-25 04:22:26.319487 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.43s 2025-05-25 04:22:26.320245 | orchestrator | Include mariadb post-upgrade.yml ---------------------------------------- 0.41s 2025-05-25 04:22:26.320780 | orchestrator | mariadb : Group MariaDB hosts based on shards --------------------------- 0.37s 2025-05-25 04:22:26.321330 | orchestrator | Include mariadb post-deploy.yml ----------------------------------------- 0.30s 2025-05-25 04:22:26.321572 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.24s 2025-05-25 04:22:26.809630 | orchestrator | + osism apply mariadb_backup -e mariadb_backup_type=incremental 2025-05-25 04:22:28.503909 | orchestrator | 
2025-05-25 04:22:28 | INFO  | Task ab9b2725-bb52-4eb3-a36e-92570d06e8d5 (mariadb_backup) was prepared for execution. 2025-05-25 04:22:28.504032 | orchestrator | 2025-05-25 04:22:28 | INFO  | It takes a moment until task ab9b2725-bb52-4eb3-a36e-92570d06e8d5 (mariadb_backup) has been started and output is visible here. 2025-05-25 04:22:32.144415 | orchestrator | 2025-05-25 04:22:32.144530 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-05-25 04:22:32.144546 | orchestrator | 2025-05-25 04:22:32.145040 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-05-25 04:22:32.149424 | orchestrator | Sunday 25 May 2025 04:22:32 +0000 (0:00:00.142) 0:00:00.142 ************ 2025-05-25 04:22:32.289752 | orchestrator | ok: [testbed-node-0] 2025-05-25 04:22:32.394463 | orchestrator | ok: [testbed-node-1] 2025-05-25 04:22:32.394554 | orchestrator | ok: [testbed-node-2] 2025-05-25 04:22:32.395465 | orchestrator | 2025-05-25 04:22:32.395835 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-05-25 04:22:32.396399 | orchestrator | Sunday 25 May 2025 04:22:32 +0000 (0:00:00.253) 0:00:00.395 ************ 2025-05-25 04:22:32.857683 | orchestrator | ok: [testbed-node-0] => (item=enable_mariadb_True) 2025-05-25 04:22:32.857879 | orchestrator | ok: [testbed-node-1] => (item=enable_mariadb_True) 2025-05-25 04:22:32.859047 | orchestrator | ok: [testbed-node-2] => (item=enable_mariadb_True) 2025-05-25 04:22:32.860901 | orchestrator | 2025-05-25 04:22:32.861399 | orchestrator | PLAY [Apply role mariadb] ****************************************************** 2025-05-25 04:22:32.861849 | orchestrator | 2025-05-25 04:22:32.862401 | orchestrator | TASK [mariadb : Group MariaDB hosts based on shards] *************************** 2025-05-25 04:22:32.862910 | orchestrator | Sunday 25 May 2025 04:22:32 +0000 (0:00:00.463) 0:00:00.859 ************ 
2025-05-25 04:22:33.225675 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2025-05-25 04:22:33.229007 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-1) 2025-05-25 04:22:33.229756 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-2) 2025-05-25 04:22:33.230493 | orchestrator | 2025-05-25 04:22:33.231190 | orchestrator | TASK [mariadb : include_tasks] ************************************************* 2025-05-25 04:22:33.235152 | orchestrator | Sunday 25 May 2025 04:22:33 +0000 (0:00:00.365) 0:00:01.225 ************ 2025-05-25 04:22:33.678614 | orchestrator | included: /ansible/roles/mariadb/tasks/backup.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-25 04:22:33.678716 | orchestrator | 2025-05-25 04:22:33.679341 | orchestrator | TASK [mariadb : Get MariaDB container facts] *********************************** 2025-05-25 04:22:33.679400 | orchestrator | Sunday 25 May 2025 04:22:33 +0000 (0:00:00.454) 0:00:01.680 ************ 2025-05-25 04:22:36.439465 | orchestrator | ok: [testbed-node-1] 2025-05-25 04:22:36.440428 | orchestrator | ok: [testbed-node-0] 2025-05-25 04:22:36.440515 | orchestrator | ok: [testbed-node-2] 2025-05-25 04:22:36.440892 | orchestrator | 2025-05-25 04:22:36.441675 | orchestrator | TASK [mariadb : Taking incremental database backup via Mariabackup] ************ 2025-05-25 04:22:36.442647 | orchestrator | Sunday 25 May 2025 04:22:36 +0000 (0:00:02.755) 0:00:04.436 ************ 2025-05-25 04:22:40.879649 | orchestrator | skipping: [testbed-node-1] 2025-05-25 04:22:40.880552 | orchestrator | skipping: [testbed-node-2] 2025-05-25 04:22:40.882310 | orchestrator | fatal: [testbed-node-0]: FAILED! 
=> {"changed": true, "msg": "Container exited with non-zero return code 139", "rc": 139, "stderr": "INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json\nINFO:__main__:Validating config file\nINFO:__main__:Kolla config strategy set to: COPY_ALWAYS\nINFO:__main__:Copying /etc/mysql/my.cnf to /etc/kolla/defaults/etc/mysql/my.cnf\nINFO:__main__:Copying permissions from /etc/mysql/my.cnf onto /etc/kolla/defaults/etc/mysql/my.cnf\nINFO:__main__:Copying service configuration files\nINFO:__main__:Deleting /etc/mysql/my.cnf\nINFO:__main__:Copying /var/lib/kolla/config_files/my.cnf to /etc/mysql/my.cnf\nINFO:__main__:Setting permission for /etc/mysql/my.cnf\nINFO:__main__:Writing out command to execute\nINFO:__main__:Setting permission for /var/log/kolla/mariadb\nINFO:__main__:Setting permission for /backup\n[00] 2025-05-25 04:22:40 Connecting to MariaDB server host: 192.168.16.11, user: backup_shard_0, password: set, port: 3306, socket: not set\n[00] 2025-05-25 04:22:40 Using server version 10.11.13-MariaDB-deb12-log\nmariabackup based on MariaDB server 10.11.13-MariaDB debian-linux-gnu (x86_64)\n[00] 2025-05-25 04:22:40 incremental backup from 0 is enabled.\n[00] 2025-05-25 04:22:40 uses posix_fadvise().\n[00] 2025-05-25 04:22:40 cd to /var/lib/mysql/\n[00] 2025-05-25 04:22:40 open files limit requested 0, set to 1048576\n[00] 2025-05-25 04:22:40 mariabackup: using the following InnoDB configuration:\n[00] 2025-05-25 04:22:40 innodb_data_home_dir = \n[00] 2025-05-25 04:22:40 innodb_data_file_path = ibdata1:12M:autoextend\n[00] 2025-05-25 04:22:40 innodb_log_group_home_dir = ./\n[00] 2025-05-25 04:22:40 InnoDB: Using liburing\n2025-05-25 4:22:40 0 [Note] InnoDB: Number of transaction pools: 1\nmariabackup: io_uring_queue_init() failed with EPERM: sysctl kernel.io_uring_disabled has the value 2, or 1 and the user of the process is not a member of sysctl kernel.io_uring_group. 
(see man 2 io_uring_setup).\n2025-05-25 4:22:40 0 [Warning] InnoDB: liburing disabled: falling back to innodb_use_native_aio=OFF\n2025-05-25 4:22:40 0 [Note] InnoDB: Memory-mapped log (block size=512 bytes)\n250525 4:22:40 [ERROR] mariabackup got signal 11 ;\nSorry, we probably made a mistake, and this is a bug.\n\nYour assistance in bug reporting will enable us to fix this for the next release.\nTo report this bug, see https://mariadb.com/kb/en/reporting-bugs about how to report\na bug on https://jira.mariadb.org/.\n\nPlease include the information from the server start above, to the end of the\ninformation below.\n\nServer version: 10.11.13-MariaDB-deb12 source revision: 8fb09426b98583916ccfd4f8c49741adc115bac3\n\nThe information page at https://mariadb.com/kb/en/how-to-produce-a-full-stack-trace-for-mariadbd/\ncontains instructions to obtain a better version of the backtrace below.\nFollowing these instructions will help MariaDB developers provide a fix quicker.\n\nAttempting backtrace. 
Include this in the bug report.\n(note: Retrieving this information may fail)\n\nThread pointer: 0x0\nstack_bottom = 0x0 thread_stack 0x49000\nPrinting to addr2line failed\nmariabackup(my_print_stacktrace+0x2e)[0x561bf10fe3ae]\nmariabackup(handle_fatal_signal+0x229)[0x561bf0c216d9]\n/lib/x86_64-linux-gnu/libc.so.6(+0x3c050)[0x7fa33f40e050]\nmariabackup(server_mysql_fetch_row+0x14)[0x561bf086d474]\nmariabackup(+0x76ca87)[0x561bf083fa87]\nmariabackup(+0x75f37a)[0x561bf083237a]\nmariabackup(main+0x163)[0x561bf07d7053]\n/lib/x86_64-linux-gnu/libc.so.6(+0x2724a)[0x7fa33f3f924a]\n/lib/x86_64-linux-gnu/libc.so.6(__libc_start_main+0x85)[0x7fa33f3f9305]\nmariabackup(_start+0x21)[0x561bf081c161]\nWriting a core file...\nWorking directory at /var/lib/mysql\nResource Limits (excludes unlimited resources):\nLimit Soft Limit Hard Limit Units \nMax stack size 8388608 unlimited bytes \nMax open files 1048576 1048576 files \nMax locked memory 8388608 8388608 bytes \nMax pending signals 128077 128077 signals \nMax msgqueue size 819200 819200 bytes \nMax nice priority 0 0 \nMax realtime priority 0 0 \nCore pattern: |/usr/share/apport/apport -p%p -s%s -c%c -d%d -P%P -u%u -g%g -- %E\n\nKernel version: Linux version 6.11.0-26-generic (buildd@lcy02-amd64-074) (x86_64-linux-gnu-gcc-13 (Ubuntu 13.3.0-6ubuntu2~24.04) 13.3.0, GNU ld (GNU Binutils for Ubuntu) 2.42) #26~24.04.1-Ubuntu SMP PREEMPT_DYNAMIC Thu Apr 17 19:20:47 UTC 2\n\n/usr/local/bin/kolla_mariadb_backup_replica.sh: line 36: 44 Segmentation fault (core dumped) mariabackup --defaults-file=\"${REPLICA_MY_CNF}\" --backup --stream=mbstream --incremental-history-name=\"${LAST_FULL_DATE}\" --history=\"${LAST_FULL_DATE}\"\n 45 Done | gzip > \"${BACKUP_DIR}/incremental-$(date +%H)-mysqlbackup-${LAST_FULL_DATE}.qp.mbc.mbs.gz\"\n", "stderr_lines": ["INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json", "INFO:__main__:Validating config file", "INFO:__main__:Kolla config strategy set to: COPY_ALWAYS", 
"INFO:__main__:Copying /etc/mysql/my.cnf to /etc/kolla/defaults/etc/mysql/my.cnf", "INFO:__main__:Copying permissions from /etc/mysql/my.cnf onto /etc/kolla/defaults/etc/mysql/my.cnf", "INFO:__main__:Copying service configuration files", "INFO:__main__:Deleting /etc/mysql/my.cnf", "INFO:__main__:Copying /var/lib/kolla/config_files/my.cnf to /etc/mysql/my.cnf", "INFO:__main__:Setting permission for /etc/mysql/my.cnf", "INFO:__main__:Writing out command to execute", "INFO:__main__:Setting permission for /var/log/kolla/mariadb", "INFO:__main__:Setting permission for /backup", "[00] 2025-05-25 04:22:40 Connecting to MariaDB server host: 192.168.16.11, user: backup_shard_0, password: set, port: 3306, socket: not set", "[00] 2025-05-25 04:22:40 Using server version 10.11.13-MariaDB-deb12-log", "mariabackup based on MariaDB server 10.11.13-MariaDB debian-linux-gnu (x86_64)", "[00] 2025-05-25 04:22:40 incremental backup from 0 is enabled.", "[00] 2025-05-25 04:22:40 uses posix_fadvise().", "[00] 2025-05-25 04:22:40 cd to /var/lib/mysql/", "[00] 2025-05-25 04:22:40 open files limit requested 0, set to 1048576", "[00] 2025-05-25 04:22:40 mariabackup: using the following InnoDB configuration:", "[00] 2025-05-25 04:22:40 innodb_data_home_dir = ", "[00] 2025-05-25 04:22:40 innodb_data_file_path = ibdata1:12M:autoextend", "[00] 2025-05-25 04:22:40 innodb_log_group_home_dir = ./", "[00] 2025-05-25 04:22:40 InnoDB: Using liburing", "2025-05-25 4:22:40 0 [Note] InnoDB: Number of transaction pools: 1", "mariabackup: io_uring_queue_init() failed with EPERM: sysctl kernel.io_uring_disabled has the value 2, or 1 and the user of the process is not a member of sysctl kernel.io_uring_group. 
(see man 2 io_uring_setup).", "2025-05-25 4:22:40 0 [Warning] InnoDB: liburing disabled: falling back to innodb_use_native_aio=OFF", "2025-05-25 4:22:40 0 [Note] InnoDB: Memory-mapped log (block size=512 bytes)", "250525 4:22:40 [ERROR] mariabackup got signal 11 ;", "Sorry, we probably made a mistake, and this is a bug.", "", "Your assistance in bug reporting will enable us to fix this for the next release.", "To report this bug, see https://mariadb.com/kb/en/reporting-bugs about how to report", "a bug on https://jira.mariadb.org/.", "", "Please include the information from the server start above, to the end of the", "information below.", "", "Server version: 10.11.13-MariaDB-deb12 source revision: 8fb09426b98583916ccfd4f8c49741adc115bac3", "", "The information page at https://mariadb.com/kb/en/how-to-produce-a-full-stack-trace-for-mariadbd/", "contains instructions to obtain a better version of the backtrace below.", "Following these instructions will help MariaDB developers provide a fix quicker.", "", "Attempting backtrace. 
Include this in the bug report.", "(note: Retrieving this information may fail)", "", "Thread pointer: 0x0", "stack_bottom = 0x0 thread_stack 0x49000", "Printing to addr2line failed", "mariabackup(my_print_stacktrace+0x2e)[0x561bf10fe3ae]", "mariabackup(handle_fatal_signal+0x229)[0x561bf0c216d9]", "/lib/x86_64-linux-gnu/libc.so.6(+0x3c050)[0x7fa33f40e050]", "mariabackup(server_mysql_fetch_row+0x14)[0x561bf086d474]", "mariabackup(+0x76ca87)[0x561bf083fa87]", "mariabackup(+0x75f37a)[0x561bf083237a]", "mariabackup(main+0x163)[0x561bf07d7053]", "/lib/x86_64-linux-gnu/libc.so.6(+0x2724a)[0x7fa33f3f924a]", "/lib/x86_64-linux-gnu/libc.so.6(__libc_start_main+0x85)[0x7fa33f3f9305]", "mariabackup(_start+0x21)[0x561bf081c161]", "Writing a core file...", "Working directory at /var/lib/mysql", "Resource Limits (excludes unlimited resources):", "Limit Soft Limit Hard Limit Units ", "Max stack size 8388608 unlimited bytes ", "Max open files 1048576 1048576 files ", "Max locked memory 8388608 8388608 bytes ", "Max pending signals 128077 128077 signals ", "Max msgqueue size 819200 819200 bytes ", "Max nice priority 0 0 ", "Max realtime priority 0 0 ", "Core pattern: |/usr/share/apport/apport -p%p -s%s -c%c -d%d -P%P -u%u -g%g -- %E", "", "Kernel version: Linux version 6.11.0-26-generic (buildd@lcy02-amd64-074) (x86_64-linux-gnu-gcc-13 (Ubuntu 13.3.0-6ubuntu2~24.04) 13.3.0, GNU ld (GNU Binutils for Ubuntu) 2.42) #26~24.04.1-Ubuntu SMP PREEMPT_DYNAMIC Thu Apr 17 19:20:47 UTC 2", "", "/usr/local/bin/kolla_mariadb_backup_replica.sh: line 36: 44 Segmentation fault (core dumped) mariabackup --defaults-file=\"${REPLICA_MY_CNF}\" --backup --stream=mbstream --incremental-history-name=\"${LAST_FULL_DATE}\" --history=\"${LAST_FULL_DATE}\"", " 45 Done | gzip > \"${BACKUP_DIR}/incremental-$(date +%H)-mysqlbackup-${LAST_FULL_DATE}.qp.mbc.mbs.gz\""], "stdout": "Taking an incremental backup\n", "stdout_lines": ["Taking an incremental backup"]} 2025-05-25 04:22:41.042175 | orchestrator | [WARNING]: 
Could not match supplied host pattern, ignoring: mariadb_restart 2025-05-25 04:22:41.043021 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: mariadb_start 2025-05-25 04:22:41.043952 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: 2025-05-25 04:22:41.045056 | orchestrator | mariadb_bootstrap_restart 2025-05-25 04:22:41.119868 | orchestrator | 2025-05-25 04:22:41.122362 | orchestrator | PLAY [Restart mariadb services] ************************************************ 2025-05-25 04:22:41.127245 | orchestrator | skipping: no hosts matched 2025-05-25 04:22:41.128510 | orchestrator | 2025-05-25 04:22:41.129264 | orchestrator | PLAY [Start mariadb services] ************************************************** 2025-05-25 04:22:41.130066 | orchestrator | skipping: no hosts matched 2025-05-25 04:22:41.131309 | orchestrator | 2025-05-25 04:22:41.133728 | orchestrator | PLAY [Restart bootstrap mariadb service] *************************************** 2025-05-25 04:22:41.134189 | orchestrator | skipping: no hosts matched 2025-05-25 04:22:41.137617 | orchestrator | 2025-05-25 04:22:41.139910 | orchestrator | PLAY [Apply mariadb post-configuration] **************************************** 2025-05-25 04:22:41.140239 | orchestrator | 2025-05-25 04:22:41.142278 | orchestrator | TASK [Include mariadb post-deploy.yml] ***************************************** 2025-05-25 04:22:41.143576 | orchestrator | Sunday 25 May 2025 04:22:41 +0000 (0:00:04.684) 0:00:09.120 ************ 2025-05-25 04:22:41.333207 | orchestrator | skipping: [testbed-node-1] 2025-05-25 04:22:41.333569 | orchestrator | skipping: [testbed-node-2] 2025-05-25 04:22:41.334305 | orchestrator | 2025-05-25 04:22:41.335761 | orchestrator | TASK [Include mariadb post-upgrade.yml] **************************************** 2025-05-25 04:22:41.335831 | orchestrator | Sunday 25 May 2025 04:22:41 +0000 (0:00:00.213) 0:00:09.334 ************ 2025-05-25 04:22:41.454983 | 
orchestrator | skipping: [testbed-node-1] 2025-05-25 04:22:41.456004 | orchestrator | skipping: [testbed-node-2] 2025-05-25 04:22:41.456146 | orchestrator | 2025-05-25 04:22:41.457092 | orchestrator | PLAY RECAP ********************************************************************* 2025-05-25 04:22:41.457747 | orchestrator | 2025-05-25 04:22:41 | INFO  | Play has been completed. There may now be a delay until all logs have been written. 2025-05-25 04:22:41.458085 | orchestrator | 2025-05-25 04:22:41 | INFO  | Please wait and do not abort execution. 2025-05-25 04:22:41.459473 | orchestrator | testbed-node-0 : ok=5  changed=0 unreachable=0 failed=1  skipped=0 rescued=0 ignored=0 2025-05-25 04:22:41.460250 | orchestrator | testbed-node-1 : ok=4  changed=0 unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2025-05-25 04:22:41.461113 | orchestrator | testbed-node-2 : ok=4  changed=0 unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2025-05-25 04:22:41.462948 | orchestrator | 2025-05-25 04:22:41.463791 | orchestrator | 2025-05-25 04:22:41.464325 | orchestrator | TASKS RECAP ******************************************************************** 2025-05-25 04:22:41.464862 | orchestrator | Sunday 25 May 2025 04:22:41 +0000 (0:00:00.121) 0:00:09.455 ************ 2025-05-25 04:22:41.465362 | orchestrator | =============================================================================== 2025-05-25 04:22:41.465824 | orchestrator | mariadb : Taking incremental database backup via Mariabackup ------------ 4.68s 2025-05-25 04:22:41.466124 | orchestrator | mariadb : Get MariaDB container facts ----------------------------------- 2.76s 2025-05-25 04:22:41.466616 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.46s 2025-05-25 04:22:41.467622 | orchestrator | mariadb : include_tasks ------------------------------------------------- 0.45s 2025-05-25 04:22:41.467650 | orchestrator | mariadb : Group MariaDB hosts based on shards 
--------------------------- 0.37s 2025-05-25 04:22:41.467771 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.25s 2025-05-25 04:22:41.468307 | orchestrator | Include mariadb post-deploy.yml ----------------------------------------- 0.21s 2025-05-25 04:22:41.468710 | orchestrator | Include mariadb post-upgrade.yml ---------------------------------------- 0.12s 2025-05-25 04:22:41.799487 | orchestrator | 2025-05-25 04:22:41 | INFO  | Task 7fee25ae-d29e-4d32-a720-5d48c33780e3 (mariadb_backup) was prepared for execution. 2025-05-25 04:22:41.799612 | orchestrator | 2025-05-25 04:22:41 | INFO  | It takes a moment until task 7fee25ae-d29e-4d32-a720-5d48c33780e3 (mariadb_backup) has been started and output is visible here. 2025-05-25 04:22:45.609668 | orchestrator | 2025-05-25 04:22:45.609788 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-05-25 04:22:45.613590 | orchestrator | 2025-05-25 04:22:45.614510 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-05-25 04:22:45.614990 | orchestrator | Sunday 25 May 2025 04:22:45 +0000 (0:00:00.180) 0:00:00.180 ************ 2025-05-25 04:22:45.798292 | orchestrator | ok: [testbed-node-0] 2025-05-25 04:22:45.921574 | orchestrator | ok: [testbed-node-1] 2025-05-25 04:22:45.925340 | orchestrator | ok: [testbed-node-2] 2025-05-25 04:22:45.925697 | orchestrator | 2025-05-25 04:22:45.925725 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-05-25 04:22:45.925757 | orchestrator | Sunday 25 May 2025 04:22:45 +0000 (0:00:00.314) 0:00:00.494 ************ 2025-05-25 04:22:46.513935 | orchestrator | ok: [testbed-node-0] => (item=enable_mariadb_True) 2025-05-25 04:22:46.515129 | orchestrator | ok: [testbed-node-1] => (item=enable_mariadb_True) 2025-05-25 04:22:46.516403 | orchestrator | ok: [testbed-node-2] => (item=enable_mariadb_True) 
2025-05-25 04:22:46.519055 | orchestrator | 2025-05-25 04:22:46.519788 | orchestrator | PLAY [Apply role mariadb] ****************************************************** 2025-05-25 04:22:46.520660 | orchestrator | 2025-05-25 04:22:46.520872 | orchestrator | TASK [mariadb : Group MariaDB hosts based on shards] *************************** 2025-05-25 04:22:46.521196 | orchestrator | Sunday 25 May 2025 04:22:46 +0000 (0:00:00.590) 0:00:01.085 ************ 2025-05-25 04:22:46.905735 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2025-05-25 04:22:46.907283 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-1) 2025-05-25 04:22:46.909095 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-2) 2025-05-25 04:22:46.910616 | orchestrator | 2025-05-25 04:22:46.911783 | orchestrator | TASK [mariadb : include_tasks] ************************************************* 2025-05-25 04:22:46.912664 | orchestrator | Sunday 25 May 2025 04:22:46 +0000 (0:00:00.394) 0:00:01.479 ************ 2025-05-25 04:22:47.417221 | orchestrator | included: /ansible/roles/mariadb/tasks/backup.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-25 04:22:47.417767 | orchestrator | 2025-05-25 04:22:47.417971 | orchestrator | TASK [mariadb : Get MariaDB container facts] *********************************** 2025-05-25 04:22:47.418288 | orchestrator | Sunday 25 May 2025 04:22:47 +0000 (0:00:00.510) 0:00:01.990 ************ 2025-05-25 04:22:50.514294 | orchestrator | ok: [testbed-node-1] 2025-05-25 04:22:50.514936 | orchestrator | ok: [testbed-node-0] 2025-05-25 04:22:50.516349 | orchestrator | ok: [testbed-node-2] 2025-05-25 04:22:50.518596 | orchestrator | 2025-05-25 04:22:50.519753 | orchestrator | TASK [mariadb : Taking incremental database backup via Mariabackup] ************ 2025-05-25 04:22:50.520499 | orchestrator | Sunday 25 May 2025 04:22:50 +0000 (0:00:03.093) 0:00:05.083 ************ 2025-05-25 04:22:54.924839 | orchestrator | skipping: [testbed-node-1] 
2025-05-25 04:22:54.926118 | orchestrator | skipping: [testbed-node-2]
2025-05-25 04:22:54.928066 | orchestrator | fatal: [testbed-node-0]: FAILED! => {"changed": true, "msg": "Container exited with non-zero return code 139", "rc": 139, "stderr": "INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json\nINFO:__main__:Validating config file\nINFO:__main__:Kolla config strategy set to: COPY_ALWAYS\nINFO:__main__:Copying /etc/mysql/my.cnf to /etc/kolla/defaults/etc/mysql/my.cnf\nINFO:__main__:Copying permissions from /etc/mysql/my.cnf onto /etc/kolla/defaults/etc/mysql/my.cnf\nINFO:__main__:Copying service configuration files\nINFO:__main__:Deleting /etc/mysql/my.cnf\nINFO:__main__:Copying /var/lib/kolla/config_files/my.cnf to /etc/mysql/my.cnf\nINFO:__main__:Setting permission for /etc/mysql/my.cnf\nINFO:__main__:Writing out command to execute\nINFO:__main__:Setting permission for /var/log/kolla/mariadb\nINFO:__main__:Setting permission for /backup\n[00] 2025-05-25 04:22:54 Connecting to MariaDB server host: 192.168.16.11, user: backup_shard_0, password: set, port: 3306, socket: not set\n[00] 2025-05-25 04:22:54 Using server version 10.11.13-MariaDB-deb12-log\nmariabackup based on MariaDB server 10.11.13-MariaDB debian-linux-gnu (x86_64)\n[00] 2025-05-25 04:22:54 incremental backup from 0 is enabled.\n[00] 2025-05-25 04:22:54 uses posix_fadvise().\n[00] 2025-05-25 04:22:54 cd to /var/lib/mysql/\n[00] 2025-05-25 04:22:54 open files limit requested 0, set to 1048576\n[00] 2025-05-25 04:22:54 mariabackup: using the following InnoDB configuration:\n[00] 2025-05-25 04:22:54 innodb_data_home_dir = \n[00] 2025-05-25 04:22:54 innodb_data_file_path = ibdata1:12M:autoextend\n[00] 2025-05-25 04:22:54 innodb_log_group_home_dir = ./\n[00] 2025-05-25 04:22:54 InnoDB: Using liburing\n2025-05-25 4:22:54 0 [Note] InnoDB: Number of transaction pools: 1\nmariabackup: io_uring_queue_init() failed with EPERM: sysctl kernel.io_uring_disabled has the value 2, or 1 and
the user of the process is not a member of sysctl kernel.io_uring_group. (see man 2 io_uring_setup).\n2025-05-25 4:22:54 0 [Warning] InnoDB: liburing disabled: falling back to innodb_use_native_aio=OFF\n2025-05-25 4:22:54 0 [Note] InnoDB: Memory-mapped log (block size=512 bytes)\n250525 4:22:54 [ERROR] mariabackup got signal 11 ;\nSorry, we probably made a mistake, and this is a bug.\n\nYour assistance in bug reporting will enable us to fix this for the next release.\nTo report this bug, see https://mariadb.com/kb/en/reporting-bugs about how to report\na bug on https://jira.mariadb.org/.\n\nPlease include the information from the server start above, to the end of the\ninformation below.\n\nServer version: 10.11.13-MariaDB-deb12 source revision: 8fb09426b98583916ccfd4f8c49741adc115bac3\n\nThe information page at https://mariadb.com/kb/en/how-to-produce-a-full-stack-trace-for-mariadbd/\ncontains instructions to obtain a better version of the backtrace below.\nFollowing these instructions will help MariaDB developers provide a fix quicker.\n\nAttempting backtrace. 
Include this in the bug report.\n(note: Retrieving this information may fail)\n\nThread pointer: 0x0\nstack_bottom = 0x0 thread_stack 0x49000\nPrinting to addr2line failed\nmariabackup(my_print_stacktrace+0x2e)[0x648b93e6a3ae]\nmariabackup(handle_fatal_signal+0x229)[0x648b9398d6d9]\n/lib/x86_64-linux-gnu/libc.so.6(+0x3c050)[0x7c1c141c4050]\nmariabackup(server_mysql_fetch_row+0x14)[0x648b935d9474]\nmariabackup(+0x76ca87)[0x648b935aba87]\nmariabackup(+0x75f37a)[0x648b9359e37a]\nmariabackup(main+0x163)[0x648b93543053]\n/lib/x86_64-linux-gnu/libc.so.6(+0x2724a)[0x7c1c141af24a]\n/lib/x86_64-linux-gnu/libc.so.6(__libc_start_main+0x85)[0x7c1c141af305]\nmariabackup(_start+0x21)[0x648b93588161]\nWriting a core file...\nWorking directory at /var/lib/mysql\nResource Limits (excludes unlimited resources):\nLimit Soft Limit Hard Limit Units \nMax stack size 8388608 unlimited bytes \nMax open files 1048576 1048576 files \nMax locked memory 8388608 8388608 bytes \nMax pending signals 128077 128077 signals \nMax msgqueue size 819200 819200 bytes \nMax nice priority 0 0 \nMax realtime priority 0 0 \nCore pattern: |/usr/share/apport/apport -p%p -s%s -c%c -d%d -P%P -u%u -g%g -- %E\n\nKernel version: Linux version 6.11.0-26-generic (buildd@lcy02-amd64-074) (x86_64-linux-gnu-gcc-13 (Ubuntu 13.3.0-6ubuntu2~24.04) 13.3.0, GNU ld (GNU Binutils for Ubuntu) 2.42) #26~24.04.1-Ubuntu SMP PREEMPT_DYNAMIC Thu Apr 17 19:20:47 UTC 2\n\n/usr/local/bin/kolla_mariadb_backup_replica.sh: line 36: 44 Segmentation fault (core dumped) mariabackup --defaults-file=\"${REPLICA_MY_CNF}\" --backup --stream=mbstream --incremental-history-name=\"${LAST_FULL_DATE}\" --history=\"${LAST_FULL_DATE}\"\n 45 Done | gzip > \"${BACKUP_DIR}/incremental-$(date +%H)-mysqlbackup-${LAST_FULL_DATE}.qp.mbc.mbs.gz\"\n", "stderr_lines": ["INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json", "INFO:__main__:Validating config file", "INFO:__main__:Kolla config strategy set to: COPY_ALWAYS", 
"INFO:__main__:Copying /etc/mysql/my.cnf to /etc/kolla/defaults/etc/mysql/my.cnf", "INFO:__main__:Copying permissions from /etc/mysql/my.cnf onto /etc/kolla/defaults/etc/mysql/my.cnf", "INFO:__main__:Copying service configuration files", "INFO:__main__:Deleting /etc/mysql/my.cnf", "INFO:__main__:Copying /var/lib/kolla/config_files/my.cnf to /etc/mysql/my.cnf", "INFO:__main__:Setting permission for /etc/mysql/my.cnf", "INFO:__main__:Writing out command to execute", "INFO:__main__:Setting permission for /var/log/kolla/mariadb", "INFO:__main__:Setting permission for /backup", "[00] 2025-05-25 04:22:54 Connecting to MariaDB server host: 192.168.16.11, user: backup_shard_0, password: set, port: 3306, socket: not set", "[00] 2025-05-25 04:22:54 Using server version 10.11.13-MariaDB-deb12-log", "mariabackup based on MariaDB server 10.11.13-MariaDB debian-linux-gnu (x86_64)", "[00] 2025-05-25 04:22:54 incremental backup from 0 is enabled.", "[00] 2025-05-25 04:22:54 uses posix_fadvise().", "[00] 2025-05-25 04:22:54 cd to /var/lib/mysql/", "[00] 2025-05-25 04:22:54 open files limit requested 0, set to 1048576", "[00] 2025-05-25 04:22:54 mariabackup: using the following InnoDB configuration:", "[00] 2025-05-25 04:22:54 innodb_data_home_dir = ", "[00] 2025-05-25 04:22:54 innodb_data_file_path = ibdata1:12M:autoextend", "[00] 2025-05-25 04:22:54 innodb_log_group_home_dir = ./", "[00] 2025-05-25 04:22:54 InnoDB: Using liburing", "2025-05-25 4:22:54 0 [Note] InnoDB: Number of transaction pools: 1", "mariabackup: io_uring_queue_init() failed with EPERM: sysctl kernel.io_uring_disabled has the value 2, or 1 and the user of the process is not a member of sysctl kernel.io_uring_group. 
(see man 2 io_uring_setup).", "2025-05-25 4:22:54 0 [Warning] InnoDB: liburing disabled: falling back to innodb_use_native_aio=OFF", "2025-05-25 4:22:54 0 [Note] InnoDB: Memory-mapped log (block size=512 bytes)", "250525 4:22:54 [ERROR] mariabackup got signal 11 ;", "Sorry, we probably made a mistake, and this is a bug.", "", "Your assistance in bug reporting will enable us to fix this for the next release.", "To report this bug, see https://mariadb.com/kb/en/reporting-bugs about how to report", "a bug on https://jira.mariadb.org/.", "", "Please include the information from the server start above, to the end of the", "information below.", "", "Server version: 10.11.13-MariaDB-deb12 source revision: 8fb09426b98583916ccfd4f8c49741adc115bac3", "", "The information page at https://mariadb.com/kb/en/how-to-produce-a-full-stack-trace-for-mariadbd/", "contains instructions to obtain a better version of the backtrace below.", "Following these instructions will help MariaDB developers provide a fix quicker.", "", "Attempting backtrace. 
Include this in the bug report.", "(note: Retrieving this information may fail)", "", "Thread pointer: 0x0", "stack_bottom = 0x0 thread_stack 0x49000", "Printing to addr2line failed", "mariabackup(my_print_stacktrace+0x2e)[0x648b93e6a3ae]", "mariabackup(handle_fatal_signal+0x229)[0x648b9398d6d9]", "/lib/x86_64-linux-gnu/libc.so.6(+0x3c050)[0x7c1c141c4050]", "mariabackup(server_mysql_fetch_row+0x14)[0x648b935d9474]", "mariabackup(+0x76ca87)[0x648b935aba87]", "mariabackup(+0x75f37a)[0x648b9359e37a]", "mariabackup(main+0x163)[0x648b93543053]", "/lib/x86_64-linux-gnu/libc.so.6(+0x2724a)[0x7c1c141af24a]", "/lib/x86_64-linux-gnu/libc.so.6(__libc_start_main+0x85)[0x7c1c141af305]", "mariabackup(_start+0x21)[0x648b93588161]", "Writing a core file...", "Working directory at /var/lib/mysql", "Resource Limits (excludes unlimited resources):", "Limit Soft Limit Hard Limit Units ", "Max stack size 8388608 unlimited bytes ", "Max open files 1048576 1048576 files ", "Max locked memory 8388608 8388608 bytes ", "Max pending signals 128077 128077 signals ", "Max msgqueue size 819200 819200 bytes ", "Max nice priority 0 0 ", "Max realtime priority 0 0 ", "Core pattern: |/usr/share/apport/apport -p%p -s%s -c%c -d%d -P%P -u%u -g%g -- %E", "", "Kernel version: Linux version 6.11.0-26-generic (buildd@lcy02-amd64-074) (x86_64-linux-gnu-gcc-13 (Ubuntu 13.3.0-6ubuntu2~24.04) 13.3.0, GNU ld (GNU Binutils for Ubuntu) 2.42) #26~24.04.1-Ubuntu SMP PREEMPT_DYNAMIC Thu Apr 17 19:20:47 UTC 2", "", "/usr/local/bin/kolla_mariadb_backup_replica.sh: line 36: 44 Segmentation fault (core dumped) mariabackup --defaults-file=\"${REPLICA_MY_CNF}\" --backup --stream=mbstream --incremental-history-name=\"${LAST_FULL_DATE}\" --history=\"${LAST_FULL_DATE}\"", " 45 Done | gzip > \"${BACKUP_DIR}/incremental-$(date +%H)-mysqlbackup-${LAST_FULL_DATE}.qp.mbc.mbs.gz\""], "stdout": "Taking an incremental backup\n", "stdout_lines": ["Taking an incremental backup"]} 2025-05-25 04:22:55.113022 | orchestrator | [WARNING]: 
Could not match supplied host pattern, ignoring: mariadb_restart 2025-05-25 04:22:55.113237 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: mariadb_start 2025-05-25 04:22:55.113258 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: 2025-05-25 04:22:55.113786 | orchestrator | mariadb_bootstrap_restart 2025-05-25 04:22:55.211659 | orchestrator | 2025-05-25 04:22:55.211764 | orchestrator | PLAY [Restart mariadb services] ************************************************ 2025-05-25 04:22:55.211780 | orchestrator | skipping: no hosts matched 2025-05-25 04:22:55.211792 | orchestrator | 2025-05-25 04:22:55.213666 | orchestrator | PLAY [Start mariadb services] ************************************************** 2025-05-25 04:22:55.214064 | orchestrator | skipping: no hosts matched 2025-05-25 04:22:55.214166 | orchestrator | 2025-05-25 04:22:55.214652 | orchestrator | PLAY [Restart bootstrap mariadb service] *************************************** 2025-05-25 04:22:55.214800 | orchestrator | skipping: no hosts matched 2025-05-25 04:22:55.215027 | orchestrator | 2025-05-25 04:22:55.215370 | orchestrator | PLAY [Apply mariadb post-configuration] **************************************** 2025-05-25 04:22:55.216990 | orchestrator | 2025-05-25 04:22:55.217101 | orchestrator | TASK [Include mariadb post-deploy.yml] ***************************************** 2025-05-25 04:22:55.217120 | orchestrator | Sunday 25 May 2025 04:22:55 +0000 (0:00:04.697) 0:00:09.781 ************ 2025-05-25 04:22:55.399993 | orchestrator | skipping: [testbed-node-1] 2025-05-25 04:22:55.400097 | orchestrator | skipping: [testbed-node-2] 2025-05-25 04:22:55.400640 | orchestrator | 2025-05-25 04:22:55.401464 | orchestrator | TASK [Include mariadb post-upgrade.yml] **************************************** 2025-05-25 04:22:55.401996 | orchestrator | Sunday 25 May 2025 04:22:55 +0000 (0:00:00.193) 0:00:09.975 ************ 2025-05-25 04:22:55.524818 | 
orchestrator | skipping: [testbed-node-1] 2025-05-25 04:22:55.524915 | orchestrator | skipping: [testbed-node-2] 2025-05-25 04:22:55.525519 | orchestrator | 2025-05-25 04:22:55.525545 | orchestrator | PLAY RECAP ********************************************************************* 2025-05-25 04:22:55.525741 | orchestrator | 2025-05-25 04:22:55 | INFO  | Play has been completed. There may now be a delay until all logs have been written. 2025-05-25 04:22:55.525860 | orchestrator | 2025-05-25 04:22:55 | INFO  | Please wait and do not abort execution. 2025-05-25 04:22:55.526942 | orchestrator | testbed-node-0 : ok=5  changed=0 unreachable=0 failed=1  skipped=0 rescued=0 ignored=0 2025-05-25 04:22:55.527597 | orchestrator | testbed-node-1 : ok=4  changed=0 unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2025-05-25 04:22:55.528119 | orchestrator | testbed-node-2 : ok=4  changed=0 unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2025-05-25 04:22:55.529527 | orchestrator | 2025-05-25 04:22:55.530129 | orchestrator | 2025-05-25 04:22:55.530865 | orchestrator | TASKS RECAP ******************************************************************** 2025-05-25 04:22:55.531537 | orchestrator | Sunday 25 May 2025 04:22:55 +0000 (0:00:00.121) 0:00:10.097 ************ 2025-05-25 04:22:55.532786 | orchestrator | =============================================================================== 2025-05-25 04:22:55.533752 | orchestrator | mariadb : Taking incremental database backup via Mariabackup ------------ 4.70s 2025-05-25 04:22:55.534484 | orchestrator | mariadb : Get MariaDB container facts ----------------------------------- 3.09s 2025-05-25 04:22:55.535036 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.59s 2025-05-25 04:22:55.535486 | orchestrator | mariadb : include_tasks ------------------------------------------------- 0.51s 2025-05-25 04:22:55.536441 | orchestrator | mariadb : Group MariaDB hosts based on shards 
--------------------------- 0.39s
2025-05-25 04:22:55.537318 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.31s
2025-05-25 04:22:55.537935 | orchestrator | Include mariadb post-deploy.yml ----------------------------------------- 0.19s
2025-05-25 04:22:55.539261 | orchestrator | Include mariadb post-upgrade.yml ---------------------------------------- 0.12s
2025-05-25 04:22:56.351076 | orchestrator | ERROR
2025-05-25 04:22:56.351507 | orchestrator | {
2025-05-25 04:22:56.351603 | orchestrator |   "delta": "0:04:22.175806",
2025-05-25 04:22:56.351663 | orchestrator |   "end": "2025-05-25 04:22:56.172720",
2025-05-25 04:22:56.351714 | orchestrator |   "msg": "non-zero return code",
2025-05-25 04:22:56.351764 | orchestrator |   "rc": 2,
2025-05-25 04:22:56.351813 | orchestrator |   "start": "2025-05-25 04:18:33.996914"
2025-05-25 04:22:56.351859 | orchestrator | } failure
2025-05-25 04:22:56.391307 |
2025-05-25 04:22:56.391443 | PLAY RECAP
2025-05-25 04:22:56.391521 | orchestrator | ok: 23 changed: 10 unreachable: 0 failed: 1 skipped: 3 rescued: 0 ignored: 0
2025-05-25 04:22:56.391565 |
2025-05-25 04:22:56.615338 | RUN END RESULT_NORMAL: [untrusted : github.com/osism/testbed/playbooks/deploy.yml@main]
2025-05-25 04:22:56.616767 | POST-RUN START: [untrusted : github.com/osism/testbed/playbooks/post.yml@main]
2025-05-25 04:22:57.357195 |
2025-05-25 04:22:57.357380 | PLAY [Post output play]
2025-05-25 04:22:57.374678 |
2025-05-25 04:22:57.374863 | LOOP [stage-output : Register sources]
2025-05-25 04:22:57.437325 |
2025-05-25 04:22:57.437687 | TASK [stage-output : Check sudo]
2025-05-25 04:22:58.306889 | orchestrator | sudo: a password is required
2025-05-25 04:22:58.477473 | orchestrator | ok: Runtime: 0:00:00.016508
2025-05-25 04:22:58.493493 |
2025-05-25 04:22:58.493660 | LOOP [stage-output : Set source and destination for files and folders]
2025-05-25 04:22:58.543901 |
2025-05-25 04:22:58.544224 | TASK [stage-output : Build a list
of source, dest dictionaries] 2025-05-25 04:22:58.611527 | orchestrator | ok 2025-05-25 04:22:58.619861 | 2025-05-25 04:22:58.620005 | LOOP [stage-output : Ensure target folders exist] 2025-05-25 04:22:59.092001 | orchestrator | ok: "docs" 2025-05-25 04:22:59.092298 | 2025-05-25 04:22:59.359617 | orchestrator | ok: "artifacts" 2025-05-25 04:22:59.613240 | orchestrator | ok: "logs" 2025-05-25 04:22:59.635191 | 2025-05-25 04:22:59.635355 | LOOP [stage-output : Copy files and folders to staging folder] 2025-05-25 04:22:59.673859 | 2025-05-25 04:22:59.674148 | TASK [stage-output : Make all log files readable] 2025-05-25 04:22:59.971578 | orchestrator | ok 2025-05-25 04:22:59.981659 | 2025-05-25 04:22:59.981846 | TASK [stage-output : Rename log files that match extensions_to_txt] 2025-05-25 04:23:00.017344 | orchestrator | skipping: Conditional result was False 2025-05-25 04:23:00.031946 | 2025-05-25 04:23:00.032124 | TASK [stage-output : Discover log files for compression] 2025-05-25 04:23:00.057424 | orchestrator | skipping: Conditional result was False 2025-05-25 04:23:00.070210 | 2025-05-25 04:23:00.070357 | LOOP [stage-output : Archive everything from logs] 2025-05-25 04:23:00.114531 | 2025-05-25 04:23:00.114715 | PLAY [Post cleanup play] 2025-05-25 04:23:00.122770 | 2025-05-25 04:23:00.122902 | TASK [Set cloud fact (Zuul deployment)] 2025-05-25 04:23:00.185800 | orchestrator | ok 2025-05-25 04:23:00.195706 | 2025-05-25 04:23:00.195812 | TASK [Set cloud fact (local deployment)] 2025-05-25 04:23:00.231604 | orchestrator | skipping: Conditional result was False 2025-05-25 04:23:00.246232 | 2025-05-25 04:23:00.246390 | TASK [Clean the cloud environment] 2025-05-25 04:23:00.842565 | orchestrator | 2025-05-25 04:23:00 - clean up servers 2025-05-25 04:23:01.619183 | orchestrator | 2025-05-25 04:23:01 - testbed-manager 2025-05-25 04:23:01.712612 | orchestrator | 2025-05-25 04:23:01 - testbed-node-3 2025-05-25 04:23:01.802507 | orchestrator | 2025-05-25 04:23:01 - 
testbed-node-1 2025-05-25 04:23:01.896761 | orchestrator | 2025-05-25 04:23:01 - testbed-node-5 2025-05-25 04:23:01.990675 | orchestrator | 2025-05-25 04:23:01 - testbed-node-4 2025-05-25 04:23:02.084596 | orchestrator | 2025-05-25 04:23:02 - testbed-node-2 2025-05-25 04:23:02.172562 | orchestrator | 2025-05-25 04:23:02 - testbed-node-0 2025-05-25 04:23:02.268686 | orchestrator | 2025-05-25 04:23:02 - clean up keypairs 2025-05-25 04:23:02.287549 | orchestrator | 2025-05-25 04:23:02 - testbed 2025-05-25 04:23:02.311311 | orchestrator | 2025-05-25 04:23:02 - wait for servers to be gone 2025-05-25 04:23:13.112093 | orchestrator | 2025-05-25 04:23:13 - clean up ports 2025-05-25 04:23:13.286813 | orchestrator | 2025-05-25 04:23:13 - 28667457-5ca7-4e22-839b-f680c968aae9 2025-05-25 04:23:13.532529 | orchestrator | 2025-05-25 04:23:13 - 3abd4f4d-f126-4ff3-8d2b-b15a69c75e14 2025-05-25 04:23:13.820336 | orchestrator | 2025-05-25 04:23:13 - 61b297c4-642c-4505-bdb7-5a431deec364 2025-05-25 04:23:14.062988 | orchestrator | 2025-05-25 04:23:14 - 803f7b98-fbf4-4206-8ed3-e92ead5af94c 2025-05-25 04:23:14.275236 | orchestrator | 2025-05-25 04:23:14 - ba7f66bc-6830-45a7-893f-b443cf7beb30 2025-05-25 04:23:14.645519 | orchestrator | 2025-05-25 04:23:14 - e53b6fd3-a3e0-4a90-be49-0c9856876627 2025-05-25 04:23:14.877289 | orchestrator | 2025-05-25 04:23:14 - ff9166fc-e8fe-4f86-8b10-8eda5cb423d2 2025-05-25 04:23:15.111227 | orchestrator | 2025-05-25 04:23:15 - clean up volumes 2025-05-25 04:23:15.226782 | orchestrator | 2025-05-25 04:23:15 - testbed-volume-manager-base 2025-05-25 04:23:15.273625 | orchestrator | 2025-05-25 04:23:15 - testbed-volume-3-node-base 2025-05-25 04:23:15.316829 | orchestrator | 2025-05-25 04:23:15 - testbed-volume-0-node-base 2025-05-25 04:23:15.364569 | orchestrator | 2025-05-25 04:23:15 - testbed-volume-2-node-base 2025-05-25 04:23:15.406548 | orchestrator | 2025-05-25 04:23:15 - testbed-volume-4-node-base 2025-05-25 04:23:15.446860 | orchestrator | 2025-05-25 
04:23:15 - testbed-volume-5-node-base 2025-05-25 04:23:15.491931 | orchestrator | 2025-05-25 04:23:15 - testbed-volume-1-node-base 2025-05-25 04:23:15.533507 | orchestrator | 2025-05-25 04:23:15 - testbed-volume-0-node-3 2025-05-25 04:23:15.573360 | orchestrator | 2025-05-25 04:23:15 - testbed-volume-6-node-3 2025-05-25 04:23:15.621570 | orchestrator | 2025-05-25 04:23:15 - testbed-volume-7-node-4 2025-05-25 04:23:15.663800 | orchestrator | 2025-05-25 04:23:15 - testbed-volume-5-node-5 2025-05-25 04:23:15.700703 | orchestrator | 2025-05-25 04:23:15 - testbed-volume-8-node-5 2025-05-25 04:23:15.743349 | orchestrator | 2025-05-25 04:23:15 - testbed-volume-3-node-3 2025-05-25 04:23:15.785098 | orchestrator | 2025-05-25 04:23:15 - testbed-volume-4-node-4 2025-05-25 04:23:15.825818 | orchestrator | 2025-05-25 04:23:15 - testbed-volume-2-node-5 2025-05-25 04:23:15.870763 | orchestrator | 2025-05-25 04:23:15 - testbed-volume-1-node-4 2025-05-25 04:23:15.912772 | orchestrator | 2025-05-25 04:23:15 - disconnect routers 2025-05-25 04:23:16.014941 | orchestrator | 2025-05-25 04:23:16 - testbed 2025-05-25 04:23:17.158961 | orchestrator | 2025-05-25 04:23:17 - clean up subnets 2025-05-25 04:23:17.196513 | orchestrator | 2025-05-25 04:23:17 - subnet-testbed-management 2025-05-25 04:23:17.370381 | orchestrator | 2025-05-25 04:23:17 - clean up networks 2025-05-25 04:23:17.561016 | orchestrator | 2025-05-25 04:23:17 - net-testbed-management 2025-05-25 04:23:17.849965 | orchestrator | 2025-05-25 04:23:17 - clean up security groups 2025-05-25 04:23:17.894413 | orchestrator | 2025-05-25 04:23:17 - testbed-management 2025-05-25 04:23:18.037776 | orchestrator | 2025-05-25 04:23:18 - testbed-node 2025-05-25 04:23:18.151594 | orchestrator | 2025-05-25 04:23:18 - clean up floating ips 2025-05-25 04:23:18.185243 | orchestrator | 2025-05-25 04:23:18 - 81.163.192.153 2025-05-25 04:23:18.560001 | orchestrator | 2025-05-25 04:23:18 - clean up routers 2025-05-25 04:23:18.665443 | orchestrator | 
2025-05-25 04:23:18 - testbed 2025-05-25 04:23:19.806735 | orchestrator | ok: Runtime: 0:00:19.008232 2025-05-25 04:23:19.811445 | 2025-05-25 04:23:19.811613 | PLAY RECAP 2025-05-25 04:23:19.811735 | orchestrator | ok: 6 changed: 2 unreachable: 0 failed: 0 skipped: 7 rescued: 0 ignored: 0 2025-05-25 04:23:19.811796 | 2025-05-25 04:23:19.947449 | POST-RUN END RESULT_NORMAL: [untrusted : github.com/osism/testbed/playbooks/post.yml@main] 2025-05-25 04:23:19.948474 | POST-RUN START: [untrusted : github.com/osism/testbed/playbooks/cleanup.yml@main] 2025-05-25 04:23:20.685375 | 2025-05-25 04:23:20.685551 | PLAY [Cleanup play] 2025-05-25 04:23:20.702509 | 2025-05-25 04:23:20.702668 | TASK [Set cloud fact (Zuul deployment)] 2025-05-25 04:23:20.763027 | orchestrator | ok 2025-05-25 04:23:20.773688 | 2025-05-25 04:23:20.773902 | TASK [Set cloud fact (local deployment)] 2025-05-25 04:23:20.810093 | orchestrator | skipping: Conditional result was False 2025-05-25 04:23:20.830627 | 2025-05-25 04:23:20.830804 | TASK [Clean the cloud environment] 2025-05-25 04:23:21.983538 | orchestrator | 2025-05-25 04:23:21 - clean up servers 2025-05-25 04:23:22.449514 | orchestrator | 2025-05-25 04:23:22 - clean up keypairs 2025-05-25 04:23:22.465087 | orchestrator | 2025-05-25 04:23:22 - wait for servers to be gone 2025-05-25 04:23:22.502795 | orchestrator | 2025-05-25 04:23:22 - clean up ports 2025-05-25 04:23:22.583773 | orchestrator | 2025-05-25 04:23:22 - clean up volumes 2025-05-25 04:23:22.656334 | orchestrator | 2025-05-25 04:23:22 - disconnect routers 2025-05-25 04:23:22.677633 | orchestrator | 2025-05-25 04:23:22 - clean up subnets 2025-05-25 04:23:22.699037 | orchestrator | 2025-05-25 04:23:22 - clean up networks 2025-05-25 04:23:23.281285 | orchestrator | 2025-05-25 04:23:23 - clean up security groups 2025-05-25 04:23:23.316981 | orchestrator | 2025-05-25 04:23:23 - clean up floating ips 2025-05-25 04:23:23.343077 | orchestrator | 2025-05-25 04:23:23 - clean up routers 2025-05-25 
04:23:23.873516 | orchestrator | ok: Runtime: 0:00:01.741090 2025-05-25 04:23:23.878954 | 2025-05-25 04:23:23.879213 | PLAY RECAP 2025-05-25 04:23:23.879398 | orchestrator | ok: 2 changed: 1 unreachable: 0 failed: 0 skipped: 1 rescued: 0 ignored: 0 2025-05-25 04:23:23.879509 | 2025-05-25 04:23:24.014713 | POST-RUN END RESULT_NORMAL: [untrusted : github.com/osism/testbed/playbooks/cleanup.yml@main] 2025-05-25 04:23:24.017221 | POST-RUN START: [trusted : github.com/osism/zuul-config/playbooks/base/post-fetch.yaml@main] 2025-05-25 04:23:24.804301 | 2025-05-25 04:23:24.804482 | PLAY [Base post-fetch] 2025-05-25 04:23:24.821414 | 2025-05-25 04:23:24.821566 | TASK [fetch-output : Set log path for multiple nodes] 2025-05-25 04:23:24.877372 | orchestrator | skipping: Conditional result was False 2025-05-25 04:23:24.893581 | 2025-05-25 04:23:24.893806 | TASK [fetch-output : Set log path for single node] 2025-05-25 04:23:24.943268 | orchestrator | ok 2025-05-25 04:23:24.956093 | 2025-05-25 04:23:24.956275 | LOOP [fetch-output : Ensure local output dirs] 2025-05-25 04:23:25.444149 | orchestrator -> localhost | ok: "/var/lib/zuul/builds/75d22ebf6c3e48d2a89c8d4ea630ef96/work/logs" 2025-05-25 04:23:25.726043 | orchestrator -> localhost | changed: "/var/lib/zuul/builds/75d22ebf6c3e48d2a89c8d4ea630ef96/work/artifacts" 2025-05-25 04:23:26.012202 | orchestrator -> localhost | changed: "/var/lib/zuul/builds/75d22ebf6c3e48d2a89c8d4ea630ef96/work/docs" 2025-05-25 04:23:26.035539 | 2025-05-25 04:23:26.035702 | LOOP [fetch-output : Collect logs, artifacts and docs] 2025-05-25 04:23:27.009537 | orchestrator | changed: .d..t...... ./ 2025-05-25 04:23:27.009812 | orchestrator | changed: All items complete 2025-05-25 04:23:27.009854 | 2025-05-25 04:23:27.757521 | orchestrator | changed: .d..t...... ./ 2025-05-25 04:23:28.499259 | orchestrator | changed: .d..t...... 
./ 2025-05-25 04:23:28.526547 | 2025-05-25 04:23:28.526749 | LOOP [merge-output-to-logs : Move artifacts and docs to logs dir] 2025-05-25 04:23:28.564678 | orchestrator | skipping: Conditional result was False 2025-05-25 04:23:28.567529 | orchestrator | skipping: Conditional result was False 2025-05-25 04:23:28.593921 | 2025-05-25 04:23:28.594077 | PLAY RECAP 2025-05-25 04:23:28.594164 | orchestrator | ok: 3 changed: 2 unreachable: 0 failed: 0 skipped: 2 rescued: 0 ignored: 0 2025-05-25 04:23:28.594209 | 2025-05-25 04:23:28.720128 | POST-RUN END RESULT_NORMAL: [trusted : github.com/osism/zuul-config/playbooks/base/post-fetch.yaml@main] 2025-05-25 04:23:28.722550 | POST-RUN START: [trusted : github.com/osism/zuul-config/playbooks/base/post.yaml@main] 2025-05-25 04:23:29.490607 | 2025-05-25 04:23:29.490784 | PLAY [Base post] 2025-05-25 04:23:29.505690 | 2025-05-25 04:23:29.505840 | TASK [remove-build-sshkey : Remove the build SSH key from all nodes] 2025-05-25 04:23:30.480883 | orchestrator | changed 2025-05-25 04:23:30.491102 | 2025-05-25 04:23:30.491242 | PLAY RECAP 2025-05-25 04:23:30.491316 | orchestrator | ok: 1 changed: 1 unreachable: 0 failed: 0 skipped: 0 rescued: 0 ignored: 0 2025-05-25 04:23:30.491392 | 2025-05-25 04:23:30.617204 | POST-RUN END RESULT_NORMAL: [trusted : github.com/osism/zuul-config/playbooks/base/post.yaml@main] 2025-05-25 04:23:30.618275 | POST-RUN START: [trusted : github.com/osism/zuul-config/playbooks/base/post-logs.yaml@main] 2025-05-25 04:23:31.425304 | 2025-05-25 04:23:31.425489 | PLAY [Base post-logs] 2025-05-25 04:23:31.437364 | 2025-05-25 04:23:31.437529 | TASK [generate-zuul-manifest : Generate Zuul manifest] 2025-05-25 04:23:31.885644 | localhost | changed 2025-05-25 04:23:31.904902 | 2025-05-25 04:23:31.905183 | TASK [generate-zuul-manifest : Return Zuul manifest URL to Zuul] 2025-05-25 04:23:31.932843 | localhost | ok 2025-05-25 04:23:31.937959 | 2025-05-25 04:23:31.938125 | TASK [Set zuul-log-path fact] 2025-05-25 
04:23:31.954531 | localhost | ok 2025-05-25 04:23:31.964891 | 2025-05-25 04:23:31.965031 | TASK [set-zuul-log-path-fact : Set log path for a build] 2025-05-25 04:23:31.990093 | localhost | ok 2025-05-25 04:23:31.995229 | 2025-05-25 04:23:31.995379 | TASK [upload-logs : Create log directories] 2025-05-25 04:23:32.505773 | localhost | changed 2025-05-25 04:23:32.511434 | 2025-05-25 04:23:32.511602 | TASK [upload-logs : Ensure logs are readable before uploading] 2025-05-25 04:23:33.003243 | localhost -> localhost | ok: Runtime: 0:00:00.006090 2025-05-25 04:23:33.009148 | 2025-05-25 04:23:33.009304 | TASK [upload-logs : Upload logs to log server] 2025-05-25 04:23:33.580221 | localhost | Output suppressed because no_log was given 2025-05-25 04:23:33.583068 | 2025-05-25 04:23:33.583226 | LOOP [upload-logs : Compress console log and json output] 2025-05-25 04:23:33.642364 | localhost | skipping: Conditional result was False 2025-05-25 04:23:33.647572 | localhost | skipping: Conditional result was False 2025-05-25 04:23:33.654533 | 2025-05-25 04:23:33.654761 | LOOP [upload-logs : Upload compressed console log and json output] 2025-05-25 04:23:33.705529 | localhost | skipping: Conditional result was False 2025-05-25 04:23:33.706348 | 2025-05-25 04:23:33.710185 | localhost | skipping: Conditional result was False 2025-05-25 04:23:33.718217 | 2025-05-25 04:23:33.718387 | LOOP [upload-logs : Upload console log and json output]
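Editor's note on the failure above: mariabackup reports `io_uring_queue_init() failed with EPERM` because `sysctl kernel.io_uring_disabled` is set to 2 on the node, InnoDB then falls back to `innodb_use_native_aio=OFF`, and the process subsequently segfaults (signal 11) in `server_mysql_fetch_row`. A minimal diagnostic sketch, not part of the job: the `io_uring_status` helper is hypothetical, and the value meanings are taken from io_uring_setup(2).

```shell
#!/bin/sh
# Hypothetical helper: interpret kernel.io_uring_disabled values as
# documented in io_uring_setup(2). The failing testbed node reported 2,
# which makes io_uring_queue_init() return EPERM for all processes.
io_uring_status() {
    case "$1" in
        0) echo "io_uring enabled for all processes" ;;
        1) echo "io_uring restricted to members of kernel.io_uring_group" ;;
        2) echo "io_uring disabled for all processes" ;;
        *) echo "unknown value: $1" ;;
    esac
}

# Read the sysctl if available; fall back to 0 (assumed kernel default)
# so the sketch also runs on hosts without the sysctl.
current=$(sysctl -n kernel.io_uring_disabled 2>/dev/null || echo 0)
io_uring_status "$current"
io_uring_status 2
```

Note the sketch only explains why liburing was disabled; whether the later SIGSEGV is caused by the AIO fallback or is an independent mariabackup 10.11.13 bug cannot be determined from this log alone.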